Google Cloud Integration

Ops integrates with your existing Google Cloud Platform (GCP) account. You can use the Ops CLI to create and upload an image to your GCP account. Once you have uploaded an image, you can also create an instance from that image using the CLI.
By using the gcp klib it is possible to send memory usage metrics to the GCP monitoring service, thus emulating the GCP ops agent.

Pre-requisites

  1. Create a Service Account (SA) in your GCP account and download the Service Account key JSON file.
  2. Make sure your Service Account has access to Google Compute Engine and Google Storage.
  3. Get the name of the Google Cloud project where you will be creating images and instances.
  4. Create a bucket in Google Cloud Storage for storing image artifacts.
  5. Make sure you export GOOGLE_APPLICATION_CREDENTIALS with the Service Account key JSON file path before invoking the commands below.
$ export GOOGLE_APPLICATION_CREDENTIALS=~/service-key.json
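If you prefer the command line over the Cloud Console, the prerequisites above can also be scripted with gcloud and gsutil. This is a minimal sketch; the service account name ops-deploy, project prod-1000, and bucket my-deploy are placeholders, and the broad compute.admin/storage.admin roles can be narrowed to fit your policies.
$ gcloud iam service-accounts create ops-deploy --project prod-1000
$ gcloud projects add-iam-policy-binding prod-1000 \
    --member="serviceAccount:ops-deploy@prod-1000.iam.gserviceaccount.com" \
    --role="roles/compute.admin"
$ gcloud projects add-iam-policy-binding prod-1000 \
    --member="serviceAccount:ops-deploy@prod-1000.iam.gserviceaccount.com" \
    --role="roles/storage.admin"
$ gcloud iam service-accounts keys create ~/service-key.json \
    --iam-account=ops-deploy@prod-1000.iam.gserviceaccount.com
$ gsutil mb -p prod-1000 gs://my-deploy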

Image Operations

Create Image

If you have already created an Ops project, update your existing config.json by adding a CloudConfig section with specific details such as ProjectID and BucketName.
{
  "CloudConfig": {
    "ProjectID": "prod-1000",
    "Zone": "us-west1-b",
    "BucketName": "my-deploy"
  },
  "RunConfig": {
    "Memory": "2G"
  }
}
Once you have updated config.json, you can create an image in Google Cloud with the following command.
$ ops image create <elf_file|program> -c config.json -i <image_name> -t gcp
To create an image from a particular package, provide the package name to the ops image create command with the -p option.
$ ops image create -c config.json -p node_v14.2.0 -a ex.js -i <image_name> -t gcp
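If you are unsure of the exact package name and version, you can list the available packages first:
$ ops pkg list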
Nanos supports running ARM payloads on ARM instances but in order to do so you must build your image with an ARM instance type:
{
  "CloudConfig": {
    "Flavor": "t2a-standard-1"
  }
}
Also note that this instance type is not supported in every region; you can try the us-central1-a zone.
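For instance, a complete CloudConfig for an ARM build might look like the following sketch; the project, zone, and bucket values are placeholders:
{
  "CloudConfig": {
    "ProjectID": "prod-1000",
    "Zone": "us-central1-a",
    "BucketName": "my-deploy",
    "Flavor": "t2a-standard-1"
  }
}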

List Images

You can list existing images on Google Cloud with ops image list.
$ ops image list
+--------------------+--------+-------------------------------+
|        NAME        | STATUS |            CREATED            |
+--------------------+--------+-------------------------------+
| nanos-main-image   | READY  | 2019-03-21T15:06:17.567-07:00 |
+--------------------+--------+-------------------------------+
| nanos-node-image   | READY  | 2019-04-16T23:16:03.145-07:00 |
+--------------------+--------+-------------------------------+
| nanos-server-image | READY  | 2019-03-21T15:50:04.456-07:00 |
+--------------------+--------+-------------------------------+

Delete Image

ops image delete <image_name> can be used to delete an image from Google Cloud.
$ ops image delete nanos-main-image

Instance Operations

Create Instance

After the successful creation of an image in Google Cloud, we can create an instance from an existing image.
You need to export GOOGLE_APPLICATION_CREDENTIALS and pass the project ID and zone via CLI options.
$ export GOOGLE_APPLICATION_CREDENTIALS=<credentials_file_path>
$ ops instance create <image_name> -g prod-1000 -z us-west1-b -t gcp
Alternatively, you can pass a config file if you have set the project ID and zone in the project's config.json.
$ ops instance create <image_name> -t gcp -c config.json
You can provide a list of ports to be exposed on the GCP instance via config or the command line.
CLI example
$ ops instance create <image_name> -t gcp -p prod-1000 -z us-west1-a --port 80 --port 443
Sample config
{
  "CloudConfig": {
    "Platform": "gcp",
    "ProjectID": "prod-1000",
    "Zone": "us-west1-a",
    "BucketName": "my-s3-bucket",
    "InstanceProfile": "default"
  },
  "RunConfig": {
    "Ports": ["80", "443"]
  },
  "Klibs": ["gcp", "tls"],
  "ManifestPassthrough": {
    "gcp": {
      "metrics": {"interval": "120"}
    }
  }
}

Spot Provisioning

You may enable spot provisioning using the following config:
{
  "CloudConfig": {
    "Spot": true
  }
}

Disable SMT

You can disable SMT if you so desire by setting 'ThreadsPerCore' to 1. By default Nanos will have access to all vCPUs available, but this setting can force it to use only one thread per core. Performance, security, or licensing concerns might make this setting worthwhile. It is important to note that not all instances allow this setting, and you will still be billed for all vCPUs provisioned.
{
  "RunConfig": {
    "ThreadsPerCore": 1
  }
}

AMD-SEV (Secure Encrypted Virtualization)

You can enable AMD-SEV encryption-in-use on select flavors and regions in Google Cloud. A new encryption key is generated for each VM.
{
  "CloudConfig": {
    "ConfidentialVM": true,
    "Flavor": "n2d-standard-2"
  }
}
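Once the instance is up, one way to check that Confidential VM is actually enabled is to describe the instance with gcloud; the instance name and zone below are placeholders:
$ gcloud compute instances describe <instance_name> \
    --zone us-west1-b --format="value(confidentialInstanceConfig)"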

Private Static IP

By default, ops relies on DHCP.
If you would like to set a static private ip you can use the following:
{
  "RunConfig": {
    "IPAddress": "172.31.33.7"
  }
}
Note: You must choose an available IP that is within your chosen/default VPC.

IP Forwarding

By default, IP forwarding is disabled on GCP.
If you would like to enable IP forwarding when creating the instance you can use the following:
{
  "RunConfig": {
    "CanIPForward": true
  }
}

GCP metrics - memory

The gcp klib emulates some functions of GCP ops agent to send memory usage metrics to the GCP monitoring service.
Example Ops configuration to enable sending memory metrics every 2 minutes:
{
  "CloudConfig": {
    "Platform": "gcp",
    "ProjectID": "prod-1000",
    "Zone": "us-west1-a",
    "BucketName": "my-s3-bucket",
    "InstanceProfile": "default"
  },
  "Klibs": ["gcp", "tls"],
  "ManifestPassthrough": {
    "gcp": {
      "metrics": {
        "interval": "120"
      }
    }
  }
}
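Sending metrics also requires that the instance is allowed to write to Cloud Monitoring. Assuming the gcp klib authenticates via the instance's attached service account, you would grant that account roles/monitoring.metricWriter; the service account email below is a placeholder:
$ gcloud projects add-iam-policy-binding prod-1000 \
    --member="serviceAccount:<sa_email>" \
    --role="roles/monitoring.metricWriter"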

GCP logging - console

The gcp klib implements a console driver that sends console output to GCP logs.
{
  "CloudConfig": {
    "Platform": "gcp",
    "ProjectID": "prod-1000",
    "Zone": "us-west1-a",
    "BucketName": "my-s3-bucket",
    "InstanceProfile": "default"
  },
  "Klibs": ["gcp", "tls"],
  "ManifestPassthrough": {
    "gcp": {
      "logging": {
        "log_id": "my_log"
      }
    }
  }
}
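Once the instance is emitting output, a quick way to confirm log delivery is to read the log back with gcloud; the log_id above becomes the last path segment of the log name:
$ gcloud logging read 'logName=projects/prod-1000/logs/my_log' --limit 10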

List Instances

You can list instances on Google Cloud using the ops instance list command.
You need to export GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_ZONE before running the command.
$ export GOOGLE_CLOUD_PROJECT=prod-1000
$ export GOOGLE_CLOUD_ZONE=us-west1-b
$ ops instance list
+-----------------------------+---------+-------------------------------+-------------+--------------+
|            NAME             | STATUS  |            CREATED            | PRIVATE IPS |  PUBLIC IPS  |
+-----------------------------+---------+-------------------------------+-------------+--------------+
| nanos-main-image-1556601450 | RUNNING | 2019-04-29T22:17:34.609-07:00 | 10.240.0.40 | 34.83.204.40 |
+-----------------------------+---------+-------------------------------+-------------+--------------+
Alternatively, you can pass the project ID and zone via CLI options.
$ ops instance list -g prod-1000 -z us-west1-b

Get Logs for Instance

You can get logs from the serial console of a particular instance using the ops instance logs command.
You need to export GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_ZONE before running the command.
$ export GOOGLE_CLOUD_PROJECT=prod-1000
$ export GOOGLE_CLOUD_ZONE=us-west1-b
$ ops instance logs <instance_name> -t gcp
Alternatively, you can pass the project ID and zone via CLI options.
$ ops instance logs <instance_name> -g prod-1000 -z us-west1-b
You may also tail the serial console using:
$ ops instance logs --watch my-running-instance -g prod-1000 -t gcp -z us-west2-a

Delete Instance

The ops instance delete command can be used to delete an instance on Google Cloud.
You need to export GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_ZONE before running the command.
$ export GOOGLE_CLOUD_PROJECT=prod-1000
$ export GOOGLE_CLOUD_ZONE=us-west1-b
$ ops instance delete my-instance-running
Alternatively, you can pass the project ID and zone via CLI options.
$ ops instance delete -g prod-1000 -z us-west1-b my-instance-running

Create Instance with Instance Group

OPS has initial support for putting an instance into an instance group. This allows you to load balance a handful of instances and scale up/down on demand.
The instance group must already be created to use this feature. When deploying through 'instance create' OPS will create a new instance template, apply it to the instance group, and then force re-create all the instances with the new instance template. The instance template will track any firewall rule changes (such as ports).
$ ops instance create <image_name> -t gcp -p prod-1000 -z us-west1-a --port 80 --port 443 --instance-group my-instance-group
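If you do not already have one, a managed instance group can be created with gcloud; this is only a sketch, and my-instance-group and my-template are placeholder names. The initial template merely bootstraps the group, since OPS applies its own template on deploy as described above.
$ gcloud compute instance-groups managed create my-instance-group \
    --zone us-west1-a --size 1 --template my-template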

Volume Operations

Create Volume

You need to set the BucketName, ProjectID and Zone in the CloudConfig section of your configuration file and export GOOGLE_APPLICATION_CREDENTIALS before running the command.
{
  "CloudConfig": {
    "ProjectID": "prod-1000",
    "Zone": "us-west1-b",
    "BucketName": "my-deploy"
  }
}
$ export GOOGLE_APPLICATION_CREDENTIALS=<credentials_file_path>
$ ops volume create <volume_name> -t gcp -c <configuration_file_path>
To create a volume with existing files, you can add the -d flag and the directory path.
$ export GOOGLE_APPLICATION_CREDENTIALS=<credentials_file_path>
$ ops volume create <volume_name> -t gcp -c <configuration_file_path> -d <directory_path>

List Volumes

You can list volumes on Google Cloud using the ops volume list -t gcp -c <configuration_file_path> command.
You need to set the ProjectID and Zone in the CloudConfig section of your configuration file and export GOOGLE_APPLICATION_CREDENTIALS before running the command.
$ ops volume list -t gcp -c <configuration_file_path>

Delete Volume

The ops volume delete command can be used to delete a volume on Google Cloud.
You need to set the ProjectID and Zone in the CloudConfig section of your configuration file and export GOOGLE_APPLICATION_CREDENTIALS before running the command.
$ export GOOGLE_APPLICATION_CREDENTIALS=<credentials_file_path>
$ ops volume delete <volume_name> -t gcp -c <configuration_file_path>

Attach Volume

To attach a volume, you need a running instance that uses an image configured with a mount point. This means you have to create the volume before running the instance. The volume label passed at image creation must match the name of the volume you created. You can create the image by running the following command.
$ ops image create <elf_file|program> -i <image_name> -c config.json --mounts <volume_label>:<mount_path>
After the instance is running, you can attach a volume using ops volume attach <instance_name> <volume_name> -t gcp -c <configuration_file_path>.
Note: You need to stop and start the instance to see the changes applied.
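Putting the pieces together, an end-to-end attach flow might look like the following sketch; myvol, myimg, myprogram, and the mount path are placeholders:
$ ops volume create myvol -t gcp -c config.json
$ ops image create myprogram -i myimg -c config.json -t gcp --mounts myvol:/data
$ ops instance create myimg -t gcp -c config.json
$ ops volume attach <instance_name> myvol -t gcp -c config.json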

Detach Volume

You can detach a volume from a running instance using ops volume detach <instance_name> <volume_name> -t gcp -c <configuration_file_path>.

Networking Considerations

If you specify a port in your config, you are stating that you want the public IP associated with the instance to be exposed on that port. If you don't specify a port, by default the private IP allows any instance in the same VPC to talk to it.

Elastic IP

If you have already provisioned an elastic IP, you may use it by setting it in the CloudConfig:
{
  "CloudConfig": {
    "StaticIP": "1.2.3.4"
  }
}
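If you have not reserved an address yet, you can create one with gcloud and read back its value; the address name and region below are placeholders:
$ gcloud compute addresses create ops-static-ip --region us-west1
$ gcloud compute addresses describe ops-static-ip --region us-west1 \
    --format='value(address)'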

IPv6 Networking

IPv6 support differs from cloud to cloud.
To use IPv6 on Google Cloud you must create a VPC and a subnet with IPv6 enabled. You cannot use the legacy network, nor can you use an auto-created subnet.
After you create a new VPC and subnet you can adjust the subnet to be dual stack like so:
$ gcloud compute networks subnets update mysubnet \
--stack-type=IPV4_IPV6 --ipv6-access-type=EXTERNAL --region=us-west2
When you create it, you won't see in the UI that it is IPv6-enabled, but you can click the 'REST' button to see it.
A sample config:
{
  "CloudConfig": {
    "ProjectID": "my-project",
    "Zone": "us-west2-a",
    "BucketName": "nanos-test",
    "EnableIPv6": true,
    "VPC": "ipv6-test",
    "Subnet": "ipv6-test"
  },
  "RunConfig": {
    "Ports": [
      "80",
      "8080",
      "443"
    ]
  }
}
Be aware that you might not have IPv6 connectivity from the laptop/server you are testing from. You can verify from within an instance on Google Cloud or some other IPv6-capable machine via telnet:
$ telnet 2600:1900:4120:1235:: 8080
or ping:
$ ping6 2600:1900:4120:1235::
Also, keep in mind that when you create a new VPC there are no firewall rules by default, so things like ICMP (ping) won't work without adding rules manually, nor will ssh'ing into a test instance work without a corresponding rule for ssh (port 22) on the new VPC.
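For example, allowing ping and ssh on the new VPC could look like the following; the network name matches the sample config above, and you may want to restrict source ranges rather than leaving the rules open:
$ gcloud compute firewall-rules create allow-icmp --network ipv6-test --allow icmp
$ gcloud compute firewall-rules create allow-ssh --network ipv6-test --allow tcp:22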