Add the `sciencemesh` Helm repo to your client sources:

```bash
helm repo add sciencemesh https://sciencemesh.github.io/charts/
```
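If the repo was already configured, it doesn't hurt to refresh the local chart index and confirm the IOP chart is visible (standard Helm 3 commands):

```bash
# Refresh the chart index and list the charts published under the sciencemesh repo
helm repo update
helm search repo sciencemesh
```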
Next, download the example IOP configuration and adjust the externally-reachable endpoints to match your deployment:

```bash
wget -q https://raw.githubusercontent.com/cs3org/reva/master/examples/standalone/standalone.toml

# Example edits for the CERN deployment:
sed -i '/^\[grpc.services.gateway\]/a datagateway = "https://sciencemesh.cernbox.cern.ch/iop/datagateway"' standalone.toml
sed -i '/^\[grpc.services.storageprovider\]/a data_server_url = "https://sciencemesh.cernbox.cern.ch/iop/data"' standalone.toml
```
| key | value |
|---|---|
| `grpc.services.gateway.datagateway` | Set to our externally-accessible Data Gateway (`/datagateway`) |
| `grpc.services.storageprovider.data_server_url` | Points to the external endpoint for the Data Server (`/data`) |
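For reference, after the `sed` edits above the relevant sections of `standalone.toml` should contain entries along these lines (the remaining keys from the example file are omitted here; replace the CERN hostnames with your own):

```toml
[grpc.services.gateway]
datagateway = "https://sciencemesh.cernbox.cern.ch/iop/datagateway"

[grpc.services.storageprovider]
data_server_url = "https://sciencemesh.cernbox.cern.ch/iop/data"
```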
Then fetch the example OCM providers list and a users file for your site:

```bash
wget -q https://raw.githubusercontent.com/cs3org/reva/master/examples/ocm-partners/providers.demo.json

# Get the CERN users, for instance:
wget -q https://raw.githubusercontent.com/cs3org/reva/master/examples/ocm-partners/users-cern.json
```
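Optionally, you can sanity-check that both files downloaded correctly and parse as valid, non-empty JSON (this assumes `jq` is available on your client):

```bash
# Fails (non-zero exit) if either file is empty or not valid JSON
jq -e 'length > 0' users-cern.json
jq -e 'length > 0' providers.demo.json
```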
The `nginx.ingress.kubernetes.io/backend-protocol: "GRPC"` annotation can be supplied to expose gRPC services in a very easy way.
To configure the two ingress resources that expose the IOP endpoints (gRPC and HTTP), we just need to pass a few values into a `custom-ingress.yaml` file. For instance, a configuration for a cluster running the nginx-ingress controller would be similar to:
```bash
cat << 'EOF' > custom-ingress.yaml
gateway:
  ingress:
    enabled: true
    services:
      grpc:
        hostname: <hostname>
        path: /
        annotations:
          kubernetes.io/ingress.class: nginx
          nginx.ingress.kubernetes.io/ssl-redirect: "true"
          nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
        tls:
          - secretName: <keypair>
            hosts:
              - <hostname>
      http:
        hostname: <hostname>
        path: /iop(/|$)(.*)
        annotations:
          kubernetes.io/ingress.class: nginx
          nginx.ingress.kubernetes.io/ssl-redirect: "true"
          nginx.ingress.kubernetes.io/use-regex: "true"
          nginx.ingress.kubernetes.io/rewrite-target: /$2
          nginx.ingress.kubernetes.io/proxy-body-size: 200m
        tls:
          - secretName: <keypair>
            hosts:
              - <hostname>
EOF
```
`<hostname>` is the domain you'll use to expose the IOP publicly. `<keypair>` is a Kubernetes TLS secret created from the `.key` and `.crt` files; it can be omitted, together with both `tls` sections, to expose the services without TLS termination. Also note that the secret must be present in the cluster before deploying the IOP:

```bash
kubectl create secret tls <keypair> --key=tls.key --cert=tls.crt
```
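You can verify that the secret exists in the target namespace before installing the chart:

```bash
# The secret must live in the same namespace the IOP will be deployed to
kubectl get secret <keypair>
```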
Once all this is done, we can carry on with the deployment by running:

```bash
helm upgrade -i iop sciencemesh/iop \
  --set-file gateway.configFiles.revad\\.toml=standalone.toml \
  --set-file gateway.configFiles.users\\.json=users-cern.json \
  --set-file gateway.configFiles.ocm-providers\\.json=providers.demo.json \
  -f custom-ingress.yaml
```
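Once the release is installed, a quick way to check that everything came up is to inspect the release and the objects it created (the exact resource names depend on the release name, `iop` in this example):

```bash
# Check the Helm release and the pods, services and ingresses backing it
helm status iop
kubectl get pods,svc,ingress
```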
You can easily test that your deployment is reachable from outside the cluster by running the `reva` CLI and `curl` against the exposed services:
```bash
# Configure the REVA cli client to connect to your GRPC service:
reva configure
host: <hostname>:443
config saved in /.reva.config

# Log-in using any of the users provided in gateway.configFiles.users.json
reva login -list
Available login methods:
- basic
reva login basic
username: ishank
password: ishankpass
OK

# HTTP: Query the Prometheus metrics endpoint:
curl https://<hostname>/iop/metrics
```
In case you need to keep the data stored on the storage service `root` across version upgrades and restarts of an IOP deployment, you will need to enable data persistency through a Kubernetes Persistent Volume (PV). This is done by using a Persistent Volume Claim (PVC). By default, persistency is disabled for convenience, as it involves setting up a `StorageClass`, having an available driver for your storage infrastructure, etc.
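You can list the StorageClasses already available in your cluster with:

```bash
# The class marked "(default)" is used when a PVC doesn't specify one
kubectl get storageclass
```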
The `cs3org/revad` chart provides two methods to attach a volume to an IOP deployment.

The first is to let the chart create the claim: when `persistentVolume.enabled=true` alone is passed, Helm generates and installs a PVC manifest by relying on some cluster and chart preset defaults. This option is especially useful to quickly deploy the IOP for the first time, without spending too much effort on the storage configuration.
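As a minimal sketch of that first option, the flag can simply be added to the install command used earlier (the `--set-file` flags for the configuration files are assumed to stay in place as before):

```bash
# Let the chart generate a PVC using cluster and chart defaults
helm upgrade -i iop sciencemesh/iop \
  --set gateway.persistentVolume.enabled=true \
  -f custom-ingress.yaml
```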
For a full reference on the different `persistentVolume` configurations available, refer to the chart parameters list.
The second is to provide an existing PVC via `persistentVolume.existingClaim`. This option is key when rolling an upgrade in the cluster while keeping all the data from a previous version. Here's a really simple PVC manifest and the workflow to create and consume it from the chart.
```bash
cat << EOF > pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: iop-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: standard
EOF
```
```bash
kubectl apply -f pvc.yaml

# Note the 'Unbound' status for the PVC, as there's still no deployment exercising the claim
kubectl get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
iop-data   Unbound   pvc-fddca20b-69a4-43ec-ad12-6d4e2bd4a433   1Gi        RWO            standard       1d20h
```
```bash
helm upgrade -i iop sciencemesh/iop \
  --set gateway.persistentVolume.enabled=true \
  --set gateway.persistentVolume.existingClaim=iop-data
```
```bash
# Get the PV provisioned by the claim
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-fddca20b-69a4-43ec-ad12-6d4e2bd4a433   1Gi        RWO            Delete           Bound    default/iop-data   standard                1d20h
```
If the PVC was auto-provisioned by a previous release, you'll need to pass its name (i.e. `<release-name>-gateway`) as `persistentVolume.existingClaim`, as part of the `helm upgrade` command.
After deployment, continue by configuring Reva.