Go E2E Tutorial Part 8 (END): Miscellaneous To Do

haRies Efrika
5 min read · Jul 30, 2023

In this final chapter we discuss several topics not fully covered earlier in the tutorial: machine-to-machine communication, configuration, external dependencies (like Sentry and Datadog), and graceful shutdown. The files are hosted here for your reference: https://github.com/hariesef/myschool

Link to previous chapter: https://hariesef.medium.com/go-e2e-tutorial-part-7-unit-testing-with-mongodb-mock-mtest-e32511961925

Machine-to-machine Communication

In a microservice architecture, it is normal for a Service to request or update data to/from other services. When using Twirp, that communication must go through the protobuf client.

The question is … in our clean architecture diagram and folder structure, where do these protobuf clients fit?

Communicating with another microservice is no different from a Service communicating with a Storage or a Controller: they must be bound via an interface, and the client that connects to the outside world must live inside the implementation struct.

The proposal is:

  • Wrap all protobuf clients behind pkg/controller/rpc/external/external_iface.go
  • Implement the actual calls under internal/controller/rpc/external/external_impl.go

This way Service1 and Service2 can be tested without online dependencies.

Other Common Dependencies

In almost every part of the codebase we probably need to call these repeatedly:

  • logger
  • Sentry (in case a major error happens)
  • sending Datadog custom metrics
  • etc.

The question is: how do we bundle all of these and make them available everywhere? Do we have to pass them as objects into every repository and service? 😅

Whether we realize it or not, the answer is visible in how the logger and Sentry are actually implemented. Let’s have a look at the logger (in our tutorial we use “github.com/dewanggasurya/logger/log”).

The idea is an init() function inside that package which initializes a single shared object using a default constructor. If the logger is not initialized properly, a call such as log.Debugf() simply does nothing. So when it is left uninitialized in a unit test suite, it neither disturbs the tests nor requires an actual connection to an external party (as the Datadog or Sentry case would).

We also need to provide public functions like log.Debugf() that can be called from anywhere and delegate to the default object.
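The pattern described above can be sketched in a few lines. This is an illustration of the technique, not the actual API of the dewanggasurya/logger package: the `Logger` interface, `noopLogger`, and `stdoutLogger` here are invented names.

```go
package main

import "fmt"

type Logger interface {
	Debugf(format string, args ...interface{})
}

// noopLogger is the default: until configured, logging does nothing.
type noopLogger struct{}

func (noopLogger) Debugf(string, ...interface{}) {}

// stdoutLogger is a real implementation used once the app initializes it.
type stdoutLogger struct{}

func (stdoutLogger) Debugf(format string, args ...interface{}) {
	fmt.Printf("DEBUG: "+format+"\n", args...)
}

// Package-level default object, starting as a no-op so that unit tests
// which never call SetLogger stay quiet and need no external connection.
var defaultLogger Logger = noopLogger{}

func SetLogger(l Logger) { defaultLogger = l }

// Public function callable from anywhere; it delegates to the default object.
func Debugf(format string, args ...interface{}) { defaultLogger.Debugf(format, args...) }

func main() {
	Debugf("this is silently dropped, no logger set yet")
	SetLogger(stdoutLogger{})
	Debugf("hello %s", "world") // now it prints
}
```

The same shape works for any cross-cutting dependency: a package-level default that is harmless until explicitly initialized.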

If it is initialized properly, e.g.:

    loggerEngine := logger.Default().SetLevel(logger.Level(logger.DebugLevel)).SetTemplate(logger.DefaultTemplate())
    logger.SetLogger(loggerEngine)

Then the logger starts doing its job.

For the logger and Sentry this is actually not an issue, since they are already implemented as described. But what about Datadog, or some other library that is needed in many places and actually makes I/O requests?

It might be interesting to build a new repository on GitHub that wraps Datadog, for instance, and behaves like the logger 😄 But while that doesn’t exist yet, within our application project we can at least create e.g.

  • pkg/helper/datadogcentral/common.go

— where, once that package is imported, any Go file can simply call e.g.

ddc.IncrementSuccess(metricName, eventName, 1)

which is actually a wrapper for:

    err = s.statsd.Increment(
        metricName,
        []string{
            fmt.Sprintf("event:%s", event),
        },
        1,
    )

so we don’t need to care about the s.statsd object itself.
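A minimal sketch of such a wrapper package is below. The `statsdClient` interface and the `IncrementSuccess` signature are assumptions mirroring the snippet above, not the real Datadog statsd API; the fake client only exists to show that tests need no real connection.

```go
package main

import "fmt"

// statsdClient is the small slice of the statsd API we actually use
// (illustrative; the real Datadog client has a richer interface).
type statsdClient interface {
	Increment(name string, tags []string, rate float64) error
}

// fakeStatsd records calls so the wrapper can be tested offline.
type fakeStatsd struct{ calls []string }

func (f *fakeStatsd) Increment(name string, tags []string, rate float64) error {
	f.calls = append(f.calls, fmt.Sprintf("%s %v", name, tags))
	return nil
}

// Package-level client, like the logger's default object. While it is
// nil, metrics are silently skipped, so unit tests need no Datadog.
var client statsdClient

func SetClient(c statsdClient) { client = c }

// IncrementSuccess is the public wrapper any Go file can call.
func IncrementSuccess(metricName, event string, rate float64) {
	if client == nil {
		return // not initialized: no-op, exactly like the uninitialized logger
	}
	_ = client.Increment(metricName, []string{fmt.Sprintf("event:%s", event)}, rate)
}

func main() {
	IncrementSuccess("ignored.metric", "no-client", 1) // no-op, client not set
	f := &fakeStatsd{}
	SetClient(f)
	IncrementSuccess("school.enroll", "success", 1)
	fmt.Println(f.calls[0]) // school.enroll [event:success]
}
```

The nil-check gives the same property the logger has: importing the package costs nothing until the application wires in a real client.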

Alternative MongoDB Mocking Library

Before settling on mtest, I actually picked up this one: https://github.com/sv-tools/mongoifc

It basically wraps the original mongo driver and provides generated mock files for unit testing. However, the documentation is incomplete, and I got stuck trying to emulate mongo.SingleResult{} as part of a FindOne() operation. If you know how to do it, please post a comment below, thanks!

    ctx := context.Background()
    col := gomockMocks.NewMockCollection(ctrl)

    // BLOCKED. I was not able to find the correct way to construct SingleResult{}.
    // Didn't find a single example on Google.
    col.EXPECT().FindOne(ctx, gomock.Any()).Return(
        &mongo.SingleResult{},
    )
    db.EXPECT().Collection(token.TokenCollectionName).Return(col).AnyTimes()

    model, err := tokenRepoImpl.Find(context.TODO(), "123abc")

If you are interested in seeing the test file, please refer here: https://github.com/hariesef/myschool/blob/master/internal/storage/mongodb/token/token_repo_impl_test.go.old

Configuration Package

As a best practice, configuration is read from environment variables. They are flexible, and in a CI/CD pipeline they can be secretly injected with confidential values from AWS Secrets Manager, for instance.

But in our project structure, who is responsible for providing configuration?

One idea is to centralize all needed configuration into a single package, whose object is then passed as a dependency to all services and repos. In my opinion this is not clean.

Configuration is package specific. For instance, the configs needed by SQLite differ from those needed by MongoDB, and the configuration for the logger, Sentry, etc. are of course all different as well.

Every package has to be responsible for its own configuration.

So I came to this conclusion: getting environment variables from the OS is no rocket science and can easily be done in every package during initialization. A helper package is also provided in this tutorial; please refer to: https://github.com/hariesef/myschool/blob/master/pkg/helper/envar.go

It provides simple functions that retrieve environment variables as common types, with a default value.

An example of a package reading its own OS config is in this file: https://github.com/hariesef/myschool/blob/master/internal/storage/mongodb/connection.go

    const EnvMongoURI string = "MONGODB_URI"
    const DefaultMongoURI string = "mongodb://localhost:27017"

    func Connect() (*mongo.Database, *mongo.Client, error) {

        mongoURI := helper.GetEnvString(EnvMongoURI, DefaultMongoURI)
        opt := options.Client().ApplyURI(mongoURI)
        localMongoClient, err := mongo.Connect(context.Background(), opt)
        if err != nil {
            return nil, nil, err
        }

Graceful Shutdown

If we have a look at main file: https://github.com/hariesef/myschool/blob/master/cmd/server/main.go

— at the end we wait for Ctrl-C or a kill signal before proceeding with the actual shutdown process:

    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
    <-quit

    wg.Add(1)
    go repo.Disconnect()
    wg.Wait()
    log.Infoln("Bye!")

}

The variable wg is an object initialized earlier with wg := &sync.WaitGroup{} and shared with the Repositories. One reason is to use it for disconnection, just in case that takes time. Inside repo.Disconnect() we actually simulate a lagging disconnect with a 5-second sleep; at the end, it calls wg.Done().

The object has to be shared with every package that may require graceful shutdown. One real case I had before was sharing it with Pub/Sub processing: getting a payload from a Google Pub/Sub subscription channel and processing it may take time, and we don’t want the service to shut down in the middle of processing and leave a data-integrity issue behind. For each payload to process, the service calls wg.Add(1), and every time it finishes, successfully or with an error, it calls wg.Done(). With wg.Wait() waiting in the main function, all in-flight processing is guaranteed to complete before the service exits.

How to Use This Tutorial as Bootstrap

If you need to quickly build a backend service with Go, whether for a PoC, a tutorial, or a take-home project when applying to a company, feel free to git clone this repository and use it as a scaffold. Quick tips:

  • Don’t delete any files yet; you can use them for reference and examples.
  • Start creating your model files. Figure out what tables/collections you want to have.
  • Implement the repo files and write the unit tests.
  • Create the interface and implementation for the Service/business logic, plus unit tests.
  • Create the proto file for Twirp and generate the Go files.
  • Implement the Twirp methods. No unit tests are necessary here.
  • Assign your Twirp server to the router.
  • Do integration testing with Postman.
  • If everything goes fine, remove the tutorial files you don’t need.

That’s a wrap! Thank you for reading, and see you in another article.

🍻 Cheers!
