This post presents a hands-on look at how the Rados Gateway works.

1) Introduction

The Rados Gateway (RGW) is a Ceph interface that exposes an API for object storage access via S3 or Swift, allowing many applications to write to the cluster dynamically.

The image below illustrates this concept:



The Ceph documentation recommends using physical servers for the RGW layer, but this ultimately depends on the environment's load and on performance studies.


2) Environment

We use the site's lab to set up the study scenario.

We also bring up the rgw-node1 VM:

# vagrant up rgw-node1




3) Configuring RGW-NODE1

For now we will configure a single node to demonstrate how it works; future hands-on posts will show how to set up a load-balanced environment.


Log in to the controller VM and, from the deploy directory, run the installation:

[ceph@controller ~]$ cd <DEPLOY DIRECTORY> 
[ceph@controller ~]$ ceph-deploy install --rgw rgw-node1
[ceph@controller ~]$ ceph-deploy admin rgw-node1
[ceph@controller ~]$ ceph-deploy rgw create rgw-node1



Update the /etc/ceph/ceph.conf file on the rgw-node1 VM with the entries below:

[client.radosgw.gateway]
host = rgw-node1
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw print continue = false



Create the Rados GW key and grant its access:

[ceph@rgw-node1 ~]$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
[ceph@rgw-node1 ~]$ sudo chmod +rw /etc/ceph/ceph.client.radosgw.keyring
[ceph@rgw-node1 ~]$ sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
[ceph@rgw-node1 ~]$ sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
[ceph@rgw-node1 ~]$ sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring



Restart the service and enable it to start at boot:

[ceph@rgw-node1 ~]$ sudo systemctl restart ceph-radosgw@rgw.rgw-node1.service
[ceph@rgw-node1 ~]$ sudo systemctl enable ceph-radosgw@rgw.rgw-node1.service



Validate the installation by calling the URL http://rgw-node1:7480

Output

<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
       <Owner>
               <ID>anonymous</ID>
               <DisplayName></DisplayName>
       </Owner>
       <Buckets>
       </Buckets>
</ListAllMyBucketsResult>
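Before any user exists, an unauthenticated GET against the gateway returns the anonymous owner with an empty bucket list, as shown above. A minimal Python sketch (stdlib only) of how that response can be parsed; in a live check you could fetch the body with `urllib.request.urlopen("http://rgw-node1:7480")` (the hostname is an assumption about this lab):

```python
import xml.etree.ElementTree as ET

# Sample response body as returned by GET http://rgw-node1:7480
BODY = """<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>"""

NS = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}

def parse_bucket_list(body):
    """Return (owner_id, [bucket names]) from an S3 ListAllMyBuckets response."""
    root = ET.fromstring(body)
    owner = root.findtext("s3:Owner/s3:ID", namespaces=NS)
    buckets = [n.text for n in root.findall("s3:Buckets/s3:Bucket/s3:Name", namespaces=NS)]
    return owner, buckets

print(parse_bucket_list(BODY))  # ('anonymous', [])
```

If the gateway is up, the owner is `anonymous` and the bucket list is empty until users start creating buckets.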



Creating the Swift test user

[ceph@rgw-node1 ~]$ sudo radosgw-admin user create --subuser=testuser1:swift --display-name="Test User One" --key-type=swift --access=full


...
{
    "user_id": "testuser1",
    "display_name": "Test User One",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "testuser1:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [],
    "swift_keys": [
        {
            "user": "testuser1:swift",
            "secret_key": "UKVjylXZX7FlUlrdnCwMAbvxT9ERkb7gU0YRRPVH"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
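The `secret_key` under `swift_keys` above is the credential the Swift client will need. Since `radosgw-admin` prints JSON, it can be extracted programmatically, e.g. from the output of `radosgw-admin user info --uid=testuser1`. A stdlib-only sketch (the embedded JSON is a trimmed copy of the output above):

```python
import json

# Trimmed radosgw-admin output (as shown above); in practice, capture it
# with: radosgw-admin user info --uid=testuser1
RAW = """{
  "user_id": "testuser1",
  "swift_keys": [
    {"user": "testuser1:swift",
     "secret_key": "UKVjylXZX7FlUlrdnCwMAbvxT9ERkb7gU0YRRPVH"}
  ]
}"""

def swift_secret(raw_json, subuser):
    """Return the Swift secret_key for the given subuser (e.g. 'testuser1:swift')."""
    info = json.loads(raw_json)
    for key in info.get("swift_keys", []):
        if key["user"] == subuser:
            return key["secret_key"]
    raise KeyError("no swift key for %s" % subuser)

print(swift_secret(RAW, "testuser1:swift"))
```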



Creating the S3 user

[ceph@rgw-node1 ~]$ sudo radosgw-admin user create --subuser=testuser2:s3 --display-name="Test User Two" --key-type=s3 --access=full

...
{
    "user_id": "testuser2",
    "display_name": "Test User Two",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "testuser2:s3",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "testuser2:s3",
            "access_key": "4WOO9RC9YBFDLWX8O6WJ",
            "secret_key": "EIRadKpXMH3fWJEGolCT6pYOfsBNAWpcNBaLDT0K"
        },
        {
            "user": "testuser2:s3",
            "access_key": "FDEXSCOLLT0C44DFMPCV",
            "secret_key": "LF7A82p7cpaGNPaqj6Fj6AZYaTHQk5p8VeiFpwBd"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}




4) Access via the Swift API


Let's install the Swift client on the client VM:

[root@client ~]# easy_install pip
[root@client ~]# pip install --upgrade setuptools
[root@client ~]# pip install --upgrade python-swiftclient



Validating the configuration:

# Creating the bucket
[root@client ~]# swift -A http://rgw-node1:7480/auth/1.0 -U testuser1:swift -K UKVjylXZX7FlUlrdnCwMAbvxT9ERkb7gU0YRRPVH post bucket1-swift

# Listing buckets
[root@client ~]# swift -A http://rgw-node1:7480/auth/1.0 -U testuser1:swift -K UKVjylXZX7FlUlrdnCwMAbvxT9ERkb7gU0YRRPVH list
...
bucket1-swift

# Uploading a file
[root@client ~]# swift -A http://rgw-node1:7480/auth/1.0 -U testuser1:swift -K UKVjylXZX7FlUlrdnCwMAbvxT9ERkb7gU0YRRPVH upload bucket1-swift /etc/resolv.conf 
...
etc/resolv.conf

# Listing files in the bucket
[root@client ~]# swift -A http://rgw-node1:7480/auth/1.0 -U testuser1:swift -K UKVjylXZX7FlUlrdnCwMAbvxT9ERkb7gU0YRRPVH list bucket1-swift
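Under the hood, the `-A/-U/-K` flags drive Swift's TempAuth handshake: the client sends a GET to `/auth/1.0` with `X-Auth-User` and `X-Auth-Key` headers, and the response carries `X-Storage-Url` and `X-Auth-Token`, which are then used on every bucket operation. A minimal stdlib sketch of the request the client builds (URL and credentials taken from the commands above):

```python
def tempauth_request(auth_url, user, key):
    """Build the (url, headers) pair for Swift's TempAuth GET.
    The response will carry X-Storage-Url and X-Auth-Token headers."""
    return auth_url, {"X-Auth-User": user, "X-Auth-Key": key}

url, headers = tempauth_request(
    "http://rgw-node1:7480/auth/1.0",
    "testuser1:swift",
    "UKVjylXZX7FlUlrdnCwMAbvxT9ERkb7gU0YRRPVH",
)
# Against a live gateway, one could send it with:
#   urllib.request.urlopen(urllib.request.Request(url, headers=headers))
print(url, headers["X-Auth-User"])
```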





5) Access via S3



Let's install the epel repository and s3cmd on the client VM:

[root@client ~]# yum install -y  epel-release.noarch
[root@client ~]# yum install -y  s3cmd



Configure s3cmd with the --configure parameter:

[root@client ~]# s3cmd --configure

...

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: 4WOO9RC9YBFDLWX8O6WJ
Secret Key: EIRadKpXMH3fWJEGolCT6pYOfsBNAWpcNBaLDT0K
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw-node1.lab.cephbrasil.com:7480

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.rgw-node1.lab.cephbrasil.com:7480

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: 
Path to GPG program [/bin/gpg]: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: 

New settings:
  Access Key: 4WOO9RC9YBFDLWX8O6WJ
  Secret Key: EIRadKpXMH3fWJEGolCT6pYOfsBNAWpcNBaLDT0K
  Default Region: US
  S3 Endpoint: rgw-node1.lab.cephbrasil.com:7480
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.rgw-node1.lab.cephbrasil.com:7480
  Encryption password: 
  Path to GPG program: /bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'



Validating the S3 bucket:

# Creating the bucket
[root@client ~]# s3cmd mb s3://bucket1
...
Bucket 's3://bucket1/' created


# Listing buckets
[root@client ~]# s3cmd ls 
...
2018-10-30 03:44  s3://bucket1

# Uploading a file
[root@client ~]# s3cmd put /etc/resolv.conf s3://bucket1
...
upload: '/etc/resolv.conf' -> 's3://bucket1/resolv.conf'  [1 of 1]

# Listing files
[root@client ~]# s3cmd ls s3://bucket1 
...
2018-10-30 03:44        93   s3://bucket1/resolv.conf
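For every request, s3cmd signs the call with the access/secret key pair; against this gateway it uses AWS Signature Version 2: base64(HMAC-SHA1(secret_key, string-to-sign)), where the string-to-sign concatenates the HTTP verb, content headers, date, and the canonicalized resource. A stdlib-only sketch, omitting x-amz-* headers for brevity (credentials are the test ones created above):

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, verb, resource, date, content_md5="", content_type=""):
    """AWS Signature v2: base64(HMAC-SHA1(secret, string-to-sign))."""
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def auth_header(access_key, secret_key, verb, resource, date):
    """Value for the Authorization header of a v2-signed request."""
    return "AWS %s:%s" % (access_key, sign_v2(secret_key, verb, resource, date))

hdr = auth_header("4WOO9RC9YBFDLWX8O6WJ",
                  "EIRadKpXMH3fWJEGolCT6pYOfsBNAWpcNBaLDT0K",
                  "GET", "/bucket1/", "Tue, 30 Oct 2018 03:44:00 +0000")
print(hdr)  # "AWS 4WOO9RC9YBFDLWX8O6WJ:<base64 signature>"
```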
