
Microservices in Golang - part 3 - docker-compose and datastores



Sponsor me on Patreon to support more content like this.

In the previous post, we covered some of the basics of go-micro and Docker. We also introduced a second service. In this post, we're going to look at docker-compose, and how we can run our services together locally a little easier. We're going to introduce some different databases, and finally we'll introduce a third service into the mix.

Prerequisites

Install docker-compose: https://docs.docker.com/compose/install/

But first, let's look at databases.

Choosing a database

So far our data isn't persisted anywhere; it's held in memory within our services, and lost whenever our containers restart. So of course we need a way of persisting, storing, and querying our data.

The beauty of microservices is that you can use a different database per service. Of course you don't have to, and many people don't. In fact, I rarely do for small teams, as it's a bit of a mental leap to maintain several different databases rather than just one. But in some cases, one service's data might not fit the database you've used for your other services, so it makes sense to use something else. Microservices make this trivially easy, as your concerns are completely separate.

Choosing the 'correct' database for your services is an entirely different article (this one, for example), so we won't go into too much detail on this subject. However, I will say that if you have fairly loose or inconsistent datasets, then a NoSQL document store is a perfect fit. Document stores are much more flexible about what you can store, and work well with JSON. We'll be using MongoDB for our NoSQL database. No particular reason other than that it performs well, is widely used and supported, and has a great online community.

If your data is more strictly defined and relational by nature, then it can make sense to use a traditional RDBMS, or relational database. But there really aren't any hard rules; generally any will do the job. Be sure to look at your data structure, consider whether your service does more reading or more writing and how complex the queries will be, and use those as a starting point in choosing your databases. For our relational database, we'll be using Postgres. Again, no particular reason other than that it does the job well and I'm familiar with it. You could use MySQL, MariaDB, or something else.

Amazon and Google both have some fantastic managed solutions for both of these database types, if you want to avoid managing your own databases (generally advisable). Another great option is Compose, who will spin up fully managed, scalable instances of various database technologies, using the same cloud provider as your services to avoid connection latency.

Amazon:
RDBMS: https://aws.amazon.com/rds/
NoSQL: https://aws.amazon.com/dynamodb/

Google:
RDBMS: https://cloud.google.com/spanner/
NoSQL: https://cloud.google.com/datastore/

Now that we've discussed databases a little, let's do some coding!

docker-compose

In the last part of the series we looked at Docker, which lets us run our services in lightweight containers with their own runtimes and dependencies. However, it's getting slightly cumbersome to have to run and manage each service with a separate Makefile. So let's take a look at docker-compose. Docker-compose allows you to define a list of Docker containers in a YAML file, and to specify metadata about their runtime. Docker-compose services map more or less onto the same docker commands we're already using. For example:

$ docker run -p 50052:50051 -e MICRO_SERVER_ADDRESS=:50051 -e MICRO_REGISTRY=mdns shippy-service-vessel

Becomes:

version: '3.1'

services:
  shippy-service-vessel:
    build: ./shippy-service-vessel
    ports:
      - 50052:50051
    environment:
      MICRO_SERVER_ADDRESS: ":50051"
      MICRO_REGISTRY: "mdns"

Easy!

Our approach will be to create a docker-compose file for each service, but have them all join the same Docker network, so that they can communicate without being in the same compose project.

So let's create a docker-compose file in the root of our consignment service directory $ touch docker-compose.yml. Now add our services:

# docker-compose.yml
version: '3.5'

services:
  shippy-service-consignment:
    restart: always # ensures the container will restart on crash
    container_name: "shippy-service-consignment"
    build: .
    ports:
      - 50051 # exposing this port on the docker network only, not host
    links:
      - datastore
    depends_on:
      - datastore
    networks:
      - shippy-backend-tier
      - consignment-tier
    environment:
      DB_HOST: "mongodb://datastore:27017"
      MICRO_ADDRESS: ":50051"

  datastore:
    image: mongo:latest
    container_name: "datastore"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db # ensures data persistence between restarting
    networks:
      - consignment-tier
    ports:
      - 27017
    command: mongod --logpath=/dev/null

networks:
  consignment-tier:
    name: consignment-tier
  shippy-backend-tier:
    name: shippy-backend-tier

First we define the version of docker-compose we want to use, then a list of services. There are other root-level definitions such as networks and volumes, but we'll just focus on services for now.

We define our own service first, including any environment variables etc., and then our database, using the official MongoDB image.

Each service is defined by its name, then we include a build path, which is a reference to a location that should contain a Dockerfile. This tells docker-compose to use that Dockerfile to build the image. You can also use image here to use a pre-built image instead, which we'll be doing later on. Then you define your port mappings, and finally your environment variables.

To build your docker-compose stack, simply run $ docker-compose build, and to run it, $ docker-compose up. To run your stack in the background, use $ docker-compose up -d. You can also view a list of your currently running containers at any point using $ docker ps. Finally, you can stop all of your running containers with $ docker stop $(docker ps -q).

Finally we create two networks: one for our entire back-end ecosystem, and one scoped to this service. The shippy-backend-tier network we'll use to allow our services to talk to one another; the consignment-tier network creates a private network between our service and its database. You don't have to do this, you could of course put everything in the same network, but it's good to think about, and to practice, good network security.

Let's test it all worked by running our CLI tool. To run it through docker-compose, once all of the other containers are running, simply run $ docker-compose build && docker-compose run consignment-cli from the CLI project. You should see it run successfully, just as before.

Let's start hooking up our first service, the consignment service. I feel as though we should do some tidying up first. We've lumped everything into our main.go file. These are microservices, but that's no excuse to be messy! So let's create three more files in shippy-service-consignment: handler.go, datastore.go, and repository.go. I'm creating these within the root of our service, rather than as new packages and directories, which is perfectly adequate for a small microservice.

Here's a great article on organising Go codebases.

Let's start by removing all of the repository code from our main.go and re-purposing it to use the official MongoDB Go driver:

// shippy-service-consignment/repository.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy-service-consignment/proto/consignment"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

type Consignment struct {
	ID string `json:"id"`
	Weight int32 `json:"weight"`
	Description string `json:"description"`
	Containers Containers `json:"containers"`
	VesselID string `json:"vessel_id"`
}

type Container struct {
	ID string `json:"id"`
	CustomerID string `json:"customer_id"`
	UserID string `json:"user_id"`
}

type Containers []*Container

func MarshalContainerCollection(containers []*pb.Container) []*Container {
	collection := make([]*Container, 0)
	for _, container := range containers {
		collection = append(collection, MarshalContainer(container))
	}
	return collection
}

func UnmarshalContainerCollection(containers []*Container) []*pb.Container {
	collection := make([]*pb.Container, 0)
	for _, container := range containers {
		collection = append(collection, UnmarshalContainer(container))
	}
	return collection
}

func UnmarshalConsignmentCollection(consignments []*Consignment) []*pb.Consignment {
	collection := make([]*pb.Consignment, 0)
	for _, consignment := range consignments {
		collection = append(collection, UnmarshalConsignment(consignment))
	}
	return collection
}

func UnmarshalContainer(container *Container) *pb.Container {
	return &pb.Container{
		Id: container.ID,
		CustomerId: container.CustomerID,
		UserId: container.UserID,
	}
}

func MarshalContainer(container *pb.Container) *Container {
	return &Container{
		ID:         container.Id,
		CustomerID: container.CustomerId,
		UserID:     container.UserId,
	}
}

// Marshal an input consignment type to a consignment model
func MarshalConsignment(consignment *pb.Consignment) *Consignment {
	containers := MarshalContainerCollection(consignment.Containers)
	return &Consignment{
		ID: consignment.Id,
		Weight: consignment.Weight,
		Description: consignment.Description,
		Containers: containers,
		VesselID: consignment.VesselId,
	}
}

func UnmarshalConsignment(consignment *Consignment) *pb.Consignment {
	return &pb.Consignment{
		Id: consignment.ID,
		Weight: consignment.Weight,
		Description: consignment.Description,
		Containers: UnmarshalContainerCollection(consignment.Containers),
		VesselId: consignment.VesselID,
	}
}

type repository interface {
	Create(ctx context.Context, consignment *Consignment) error
	GetAll(ctx context.Context) ([]*Consignment, error)
}

// MongoRepository implementation
type MongoRepository struct {
	collection *mongo.Collection
}

// Create -
func (repository *MongoRepository) Create(ctx context.Context, consignment *Consignment) error {
	_, err := repository.collection.InsertOne(ctx, consignment)
	return err
}

// GetAll -
func (repository *MongoRepository) GetAll(ctx context.Context) ([]*Consignment, error) {
	// An empty filter matches every document in the collection
	cur, err := repository.collection.Find(ctx, bson.M{})
	if err != nil {
		return nil, err
	}
	defer cur.Close(ctx)

	var consignments []*Consignment
	for cur.Next(ctx) {
		var consignment *Consignment
		if err := cur.Decode(&consignment); err != nil {
			return nil, err
		}
		consignments = append(consignments, consignment)
	}
	return consignments, cur.Err()
}

So there we have our code responsible for interacting with our MongoDB database. You'll notice we include marshalling and unmarshalling functions for converting between the structs generated from our protobuf definition and our internal datastore models. You could in theory use the generated structs as your models as well, but this isn't recommended from a software design perspective, as you'd be coupling your data model to your delivery layer. It's good to maintain these fences between the various responsibilities in your software. It may seem like additional overhead, but it's important for the extensibility of your software.

We'll need to create the code that creates the master session/connection. Update shippy-service-consignment/datastore.go with the following:

// shippy-service-consignment/datastore.go
package main

import (
	"context"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"time"
)

// CreateClient -
func CreateClient(ctx context.Context, uri string, retry int32) (*mongo.Client, error) {
	conn, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err := conn.Ping(ctx, nil); err != nil {
		if retry >= 3 {
			return nil, err
		}
		retry = retry + 1
		time.Sleep(time.Second * 2)
		return CreateClient(ctx, uri, retry)
	}

	return conn, err
}

Here we create a connection using the given connection string, then 'ping' it to check that the connection is correct and that the datastore is available. We include some basic retry logic by having the function call itself if it can't connect; once it exceeds three retries, we let the error bubble up to be handled by the caller.

Now let's refactor our main.go file:

// shippy-service-consignment/main.go
package main

import (
	"context"
	"fmt"
	pb "github.com/<YourUsername>/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/<YourUsername>/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro"
	"log"
	"os"
)

const (
	// ApplyURI expects a full connection URI, including the mongodb:// scheme
	defaultHost = "mongodb://datastore:27017"
)

func main() {
	// Set-up micro instance
	srv := micro.NewService(
		micro.Name("shippy.service.consignment"),
	)

	srv.Init()

	uri := os.Getenv("DB_HOST")
	if uri == "" {
		uri = defaultHost
	}

	client, err := CreateClient(context.Background(), uri, 0)
	if err != nil {
		log.Panic(err)
	}
	defer client.Disconnect(context.Background())

	consignmentCollection := client.Database("shippy").Collection("consignments")

	repository := &MongoRepository{consignmentCollection}
	vesselClient := vesselProto.NewVesselServiceClient("shippy.service.vessel", srv.Client())
	h := &handler{repository, vesselClient}

	// Register handlers
	pb.RegisterShippingServiceHandler(srv.Server(), h)

	// Run the server
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}

The final bit of tidying up we need to do is to move our gRPC handler code out into our new handler.go file. So let's do that.

// shippy-service-consignment/handler.go
package main

import (
	"context"
	"github.com/pkg/errors"
	pb "github.com/<YourUsername>/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/<YourUsername>/shippy-service-vessel/proto/vessel"
)

type handler struct {
	repository
	vesselClient vesselProto.VesselServiceClient
}

// CreateConsignment - we created just one method on our service,
// which is a create method, which takes a context and a request as an
// argument, these are handled by the gRPC server.
func (s *handler) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {

	// Here we call a client instance of our vessel service with our consignment weight,
	// and the amount of containers as the capacity value
	vesselResponse, err := s.vesselClient.FindAvailable(ctx, &vesselProto.Specification{
		MaxWeight: req.Weight,
		Capacity:  int32(len(req.Containers)),
	})
	if err != nil {
		return err
	}
	if vesselResponse == nil {
		return errors.New("error fetching vessel, returned nil")
	}

	// We set the VesselId as the vessel we got back from our
	// vessel service
	req.VesselId = vesselResponse.Vessel.Id

	// Save our consignment
	if err = s.repository.Create(ctx, MarshalConsignment(req)); err != nil {
		return err
	}

	res.Created = true
	res.Consignment = req
	return nil
}

// GetConsignments -
func (s *handler) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments, err := s.repository.GetAll(ctx)
	if err != nil {
		return err
	}
	res.Consignments = UnmarshalConsignmentCollection(consignments)
	return nil
}

Now let's do the same for the vessel service. I'm not going to demonstrate this here; by this point you should have a good feel for it yourself. Remember, you can use my repository as a reference.

We will however add a new method to our vessel-service, which will allow us to create new vessels. As ever, let's start by updating our protobuf definition:

syntax = "proto3";

package vessel;

service VesselService {
  rpc FindAvailable(Specification) returns (Response) {}
  rpc Create(Vessel) returns (Response) {}
}

message Vessel {
  string id = 1;
  int32 capacity = 2;
  int32 max_weight = 3;
  string name = 4;
  bool available = 5;
  string owner_id = 6;
}

message Specification {
  int32 capacity = 1;
  int32 max_weight = 2;
}

message Response {
  Vessel vessel = 1;
  repeated Vessel vessels = 2;
  bool created = 3;
}

We've created a new Create method under our gRPC service, which takes a vessel and returns our generic response. We've also added a new field to our response message: a created bool. Re-build your proto definition to update this service. Now we'll add a new handler in shippy-service-vessel/handler.go and a new repository method:

// shippy-service-vessel/handler.go
package main

import (
	"context"
	pb "github.com/<YourUsername>/shippy-service-vessel/proto/vessel"
)

type handler struct {
	repository
}

// FindAvailable vessels
func (s *handler) FindAvailable(ctx context.Context, req *pb.Specification, res *pb.Response) error {

	// Find the next available vessel
	vessel, err := s.repository.FindAvailable(ctx, MarshalSpecification(req))
	if err != nil {
		return err
	}

	// Set the vessel as part of the response message type
	res.Vessel = UnmarshalVessel(vessel)
	return nil
}

// Create a new vessel
func (s *handler) Create(ctx context.Context, req *pb.Vessel, res *pb.Response) error {
	if err := s.repository.Create(ctx, MarshalVessel(req)); err != nil {
		return err
	}
	res.Vessel = req
	return nil
}

// shippy/shippy-service-vessel/repository.go
package main

import (
	"context"
	pb "github.com/<YourUsername>/shippy-service-vessel/proto/vessel"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

type repository interface {
	FindAvailable(ctx context.Context, spec *Specification) (*Vessel, error)
	Create(ctx context.Context, vessel *Vessel) error
}

type VesselRepository struct {
	collection *mongo.Collection
}

type Specification struct {
	Capacity int32
	MaxWeight int32
}

func MarshalSpecification(spec *pb.Specification) *Specification {
	return &Specification{
		Capacity: spec.Capacity,
		MaxWeight: spec.MaxWeight,
	}
}

func UnmarshalSpecification(spec *Specification) *pb.Specification {
	return &pb.Specification{
		Capacity: spec.Capacity,
		MaxWeight: spec.MaxWeight,
	}
}

func MarshalVessel(vessel *pb.Vessel) *Vessel {
	return &Vessel{
		ID: vessel.Id,
		Capacity: vessel.Capacity,
		MaxWeight: vessel.MaxWeight,
		Name: vessel.Name,
		Available: vessel.Available,
		OwnerID: vessel.OwnerId,
	}
}

func UnmarshalVessel(vessel *Vessel) *pb.Vessel {
	return &pb.Vessel{
		Id: vessel.ID,
		Capacity: vessel.Capacity,
		MaxWeight: vessel.MaxWeight,
		Name: vessel.Name,
		Available: vessel.Available,
		OwnerId: vessel.OwnerID,
	}
}

type Vessel struct {
	ID string
	Capacity int32
	Name string
	Available bool
	OwnerID string
	MaxWeight int32
}

// FindAvailable - finds a vessel whose capacity and max weight are
// greater than or equal to those required by the given specification,
// i.e. a vessel that can actually hold the consignment.
func (repository *VesselRepository) FindAvailable(ctx context.Context, spec *Specification) (*Vessel, error) {
	filter := bson.D{
		{"capacity", bson.D{{"$gte", spec.Capacity}}},
		{"maxweight", bson.D{{"$gte", spec.MaxWeight}}},
	}
	vessel := &Vessel{}
	if err := repository.collection.FindOne(ctx, filter).Decode(vessel); err != nil {
		return nil, err
	}
	return vessel, nil
}

// Create a new vessel
func (repository *VesselRepository) Create(ctx context.Context, vessel *Vessel) error {
	_, err := repository.collection.InsertOne(ctx, vessel)
	return err
}

Now we can create vessels! I've updated the main.go to use our new Create method to store our dummy data; see here.

So, after all of that, we have updated our services to use MongoDB. Before we try to run this, we'll need to update our docker-compose file to include a MongoDB container:

services: 
  ... 
  datastore:
    image: mongo
    ports:
      - 27017:27017

And update the environment variables in your two services to include: DB_HOST: "mongodb://datastore:27017". Notice we use datastore as the host name, and not localhost for example. This is because docker-compose handles some clever internal DNS for us.

So you should have:

version: '3.1'

services:

  consignment-cli:
    build: ./consignment-cli

  consignment-service:
    build: ./shippy-service-consignment
    ports:
      - 50051:50051
    environment:
      MICRO_ADDRESS: ":50051"
      DB_HOST: "mongodb://datastore:27017"

  vessel-service:
    build: ./shippy-service-vessel
    ports:
      - 50052:50051
    environment:
      MICRO_ADDRESS: ":50051"
      DB_HOST: "mongodb://datastore:27017"

  datastore:
    image: mongo
    ports:
      - 27017:27017

Re-build your stack with $ docker-compose build, and re-run it with $ docker-compose up. Note, sometimes because of Docker's caching, you may need to run a cache-less build to pick up certain changes. To do this in docker-compose, simply pass the --no-cache flag when running $ docker-compose build.

User service

Now let's create a third service. We'll start by updating our docker-compose.yml file. Also, to mix things up a bit, we'll add Postgres to our docker stack for our user service:

  ...
  user-service:
    build: ./shippy-service-user
    ports:
      - 50053:50051
    environment:
      MICRO_ADDRESS: ":50051"

  ...
  database:
    image: postgres
    ports:
      - 5432:5432

Now create a shippy-service-user directory in your root. And, as per the previous services, create the following files: handler.go, main.go, repository.go, database.go, Dockerfile, Makefile, a sub-directory for our proto files, and finally the proto file itself: proto/user/user.proto.

Add the following to user.proto:

syntax = "proto3";

package user;

service UserService {
    rpc Create(User) returns (Response) {}
    rpc Get(User) returns (Response) {}
    rpc GetAll(Request) returns (Response) {}
    rpc Auth(User) returns (Token) {}
    rpc ValidateToken(Token) returns (Token) {}
}

message User {
    string id = 1;
    string name = 2;
    string company = 3;
    string email = 4;
    string password = 5;
}

message Request {}

message Response {
    User user = 1;
    repeated User users = 2;
    repeated Error errors = 3;
}

message Token {
    string token = 1;
    bool valid = 2;
    repeated Error errors = 3;
}

message Error {
    int32 code = 1;
    string description = 2;
}

Now, ensuring you've created a Makefile similar to those of our previous services, you should be able to run $ make build to generate the gRPC code. As per our previous services, we've created some code to interface with our gRPC methods. We're only going to make a few of them work in this part of the series, as we just want to be able to create and fetch a user. In the next part of the series we'll look at authentication and JWT, so we'll leave anything token-related alone for now. Your handlers should look like this:
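
For reference, here's a minimal Makefile sketch along the lines of the previous parts. The exact protoc flags and image name depend on how you set up code generation earlier in the series, so treat this as an assumption rather than a prescription:

```makefile
build:
	protoc -I. --go_out=. --micro_out=. proto/user/user.proto
	docker build -t shippy-service-user .

run:
	docker run -p 50053:50051 shippy-service-user
```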

// shippy-service-user/handler.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy-service-user/proto/user"
)

type service struct {
	repo Repository
	tokenService Authable
}

func (srv *service) Get(ctx context.Context, req *pb.User, res *pb.Response) error {
	user, err := srv.repo.Get(ctx, req.Id)
	if err != nil {
		return err
	}
	res.User = user
	return nil
}

func (srv *service) GetAll(ctx context.Context, req *pb.Request, res *pb.Response) error {
	users, err := srv.repo.GetAll(ctx)
	if err != nil {
		return err
	}
	res.Users = users
	return nil
}

func (srv *service) Auth(ctx context.Context, req *pb.User, res *pb.Token) error {
	_, err := srv.repo.GetByEmailAndPassword(ctx, req)
	if err != nil {
		return err
	}
	res.Token = "testingabc"
	return nil
}

func (srv *service) Create(ctx context.Context, req *pb.User, res *pb.Response) error {
	if err := srv.repo.Create(ctx, req); err != nil {
		return err
	}
	res.User = req
	return nil
}

func (srv *service) ValidateToken(ctx context.Context, req *pb.Token, res *pb.Token) error {
	return nil
}

Now let's add our repository code:

// shippy-service-user/repository.go
package main

import (
	"context"

	pb "github.com/<YourUsername>/shippy-service-user/proto/user"
	"github.com/jinzhu/gorm"
)

type Repository interface {
	GetAll(ctx context.Context) ([]*pb.User, error)
	Get(ctx context.Context, id string) (*pb.User, error)
	Create(ctx context.Context, user *pb.User) error
	GetByEmailAndPassword(ctx context.Context, user *pb.User) (*pb.User, error)
}

type UserRepository struct {
	db *gorm.DB
}

func (repo *UserRepository) GetAll(ctx context.Context) ([]*pb.User, error) {
	var users []*pb.User
	if err := repo.db.Find(&users).Error; err != nil {
		return nil, err
	}
	return users, nil
}

func (repo *UserRepository) Get(ctx context.Context, id string) (*pb.User, error) {
	user := &pb.User{}
	if err := repo.db.First(user, "id = ?", id).Error; err != nil {
		return nil, err
	}
	return user, nil
}

func (repo *UserRepository) GetByEmailAndPassword(ctx context.Context, user *pb.User) (*pb.User, error) {
	result := &pb.User{}
	if err := repo.db.Where("email = ? and password = ?", user.Email, user.Password).First(result).Error; err != nil {
		return nil, err
	}
	return result, nil
}

func (repo *UserRepository) Create(ctx context.Context, user *pb.User) error {
	if err := repo.db.Create(user).Error; err != nil {
		return err
	}
	return nil
}

We also need to change our ORM's behaviour to generate a UUID on creation, instead of trying to generate an integer ID. In case you didn't know, a UUID is a randomly generated 128-bit identifier, usually written as a set of hyphenated hex strings, used as an ID or primary key. This is more secure than auto-incrementing IDs, because it stops people from guessing or traversing through your API endpoints. MongoDB already uses a variation of this, but we need to tell our Postgres models to use UUIDs. So in shippy-service-user/proto/user, create a new file called extensions.go, and in that file add:

package user

import (
	"github.com/jinzhu/gorm"
	"github.com/satori/go.uuid"
)

func (model *User) BeforeCreate(scope *gorm.Scope) error {
	uuid := uuid.NewV4()
	return scope.SetColumn("Id", uuid.String())
}

This hooks into GORM's event lifecycle so that we generate a UUID for our Id column, before the entity is saved.

You'll notice here that, unlike our MongoDB services, we're not doing any connection retry handling. The native SQL/Postgres drivers work slightly differently, so we don't need to worry about that this time. We're using a package called 'gorm', so let's touch on that briefly.

Gorm - Go + ORM

Gorm is a reasonably lightweight object-relational mapper, which works nicely with Postgres, MySQL, SQLite etc. It's very easy to set up and use, and it manages your database schema changes automatically.

That being said, with microservices your data structures are much smaller and contain fewer joins and less overall complexity, so don't feel as though you have to use an ORM of any kind.

We need to be able to test creating a user, so let's create another CLI tool. This time, user-cli in our project root. Similar to our consignment-cli, but this time:

package main

import (
	"log"
	"os"

	pb "github.com/<YourUsername>/shippy-service-user/proto/user"
	microclient "github.com/micro/go-micro/client"
	"github.com/micro/go-micro/cmd"
	"golang.org/x/net/context"
	"github.com/micro/cli"
	"github.com/micro/go-micro"
)


func main() {

	cmd.Init()

	// Create new user service client
	client := pb.NewUserServiceClient("go.micro.srv.user", microclient.DefaultClient)
    
    // Define our flags
	service := micro.NewService(
		micro.Flags(
			cli.StringFlag{
				Name:  "name",
				Usage: "Your full name",
			},
			cli.StringFlag{
				Name:  "email",
				Usage: "Your email",
			},
			cli.StringFlag{
				Name:  "password",
				Usage: "Your password",
			},
			cli.StringFlag{
				Name: "company",
				Usage: "Your company",
			},
		),
	)
    
    // Start as service
	service.Init(

		micro.Action(func(c *cli.Context) {

			name := c.String("name")
			email := c.String("email")
			password := c.String("password")
			company := c.String("company")

            // Call our user service
			r, err := client.Create(context.TODO(), &pb.User{
				Name: name,
				Email: email,
				Password: password,
				Company: company,
			})
			if err != nil {
				log.Fatalf("Could not create: %v", err)
			}
			log.Printf("Created: %s", r.User.Id)

			getAll, err := client.GetAll(context.Background(), &pb.Request{})
			if err != nil {
				log.Fatalf("Could not list users: %v", err)
			}
			for _, v := range getAll.Users {
				log.Println(v)
			}

			os.Exit(0)
		}),
	)

	// Run the server
	if err := service.Run(); err != nil {
		log.Println(err)
	}
}

Here we've used go-micro's command line helper, which is really neat.

We can run this and create a user:

$ docker-compose run user-cli command \
  --name="Ewan Valentine" \
  --email="ewan.valentine@example.com" \
  --password="Testing123" \
  --company="BBC"

And you should see the created user in a list!

This isn't very secure, as currently we're storing plain-text passwords, but in the next part of the series, we'll be looking at authentication and JWT tokens across our services.

So there we have it, we've created an additional service, an additional command line tool, and we've started to persist our data using two different database technologies. We've covered a lot of ground in this post, and apologies if we went over anything too quickly, covered too much or assumed too much knowledge. Please refer to the git repo and as ever, please do send me your feedback!

If you are finding this series useful, and you use an ad-blocker (who can blame you). Please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine

Or, sponsor me on Patreon to support more content like this.