Product Team Handbook

Welcome to Glints

Why this Guide

This handbook guides new members of the Glints product team through, and reminds existing members of, the ins and outs of working in this team. Every high-performing team has a set of rules and rituals that it operates by: its Operating System. This guide simply makes those rules explicit and conscious; it's a manual to the OS.

All new members have to read this guide before joining the team. The intended outcome is that after going through the guide thoroughly, new members can smoothly onboard to the team with minimal in-person guidance. Old members are encouraged to refer to this guide as and when there are questions about how we work.

Lastly, this is a living document that adapts to learnings and improvements of the team as it grows in size and wisdom. Anyone in the Glints Product Team is free to contribute to it.

People

It is heedless and unwise to join a team without understanding its raison d'etre, its "Why". Otherwise, how do you know if your membership in this team is aligned with your own life purpose? And what will motivate you to contribute and push on when the going gets tough? I cannot over-emphasize the importance of knowing "Why" you're here on the Glints product team.

The following are our vision, or "What We Aspire To Be", our mission, "How To Achieve The Vision", and our purpose, "Why We Want To Achieve Our Vision". Every member should memorize and internalize these 3 statements verbatim.

Vision:

We are the #1 recruitment platform in Asia for companies to build successful teams with young talent.

Mission:

We help companies build successful teams by:

Purpose:

We help companies build successful teams so that they can achieve their own missions and higher productivity. This creates better opportunities for young talent to realize their potential.

Values:

Our values can be easily memorized by the nifty mnemonic: RIBCO. Just think ribs co.


R: Relentlessly resourceful

I: Integrity

B: Beginner's Mindset

C: Clarity of Thought

O: Ownership

For a deeper reference, see Why Glints.

Product

Before you touch any code, you need to understand the big picture of how everything comes together. We'll ease in from the straightforward and visible front-end clients to the more technical infrastructural details.

Products and their Repositories

Glints Projects

Glints has 2 main front-end client repositories - candidate and employer. Candidate is our candidate-facing site, while employer is our employer-facing site. No surprises there. Then there's Dashboard, which is our deprecated employer dashboard, now reserved only for some of our older whitelabel clients. Other than that, the remaining front-end clients are our internal tools.

Front-end clients from the same ccTLD consume the same API service. For example, employers.glints.id and glints.id both hit api.glints.id. Currently, there are 2 API services with identical codebases, 1 for Singaporean and 1 for Indonesian clients. These 2 API servers then read and write to 2 different databases, 1 for Singapore and 1 for Indonesia. In the near future, we will merge the 2 APIs and databases into 1, as we have realised the benefits of a single database for market expansion and learnt workarounds for certain past issues.

We employ a single-codebase-multiple-services strategy in running the API and API workers. Our API workers run as a separate service, taking on computationally heavier tasks in the background.

Redis is used for caching session data and enqueuing background tasks for the API workers to execute. Redis is an in-memory key-value store, and makes a suitable caching layer due to its quick response to queries and flexible data structure storage.
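
To make the caching side concrete, here is a minimal sketch of how session data might be cached in Redis, assuming the ioredis client; the key prefix and TTL are illustrative rather than what our API actually uses.

```js
// Illustrative only: session caching with Redis via the ioredis client.
// The key prefix and one-hour TTL are assumptions, not our actual settings.
const Redis = require('ioredis');

const redis = new Redis(process.env.REDIS_URL || 'redis://localhost:6379');

// Cache a session object for one hour.
async function cacheSession(sessionId, session) {
  await redis.set(`session:${sessionId}`, JSON.stringify(session), 'EX', 60 * 60);
}

// Read it back; a cache miss returns null and the caller falls through to the database.
async function getSession(sessionId) {
  const raw = await redis.get(`session:${sessionId}`);
  return raw ? JSON.parse(raw) : null;
}
```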

Like most apps, our database is only accessible via our API. We use PostgreSQL, hosted on AWS RDS, because our data is highly relational. PostgreSQL is not just relational but object-relational, meaning that it supports complex structures and a breadth of built-in and user-defined data types. It provides extensive data capacity and is trusted for its data integrity. Moreover, it scales very well, and is known for its reliability and stability. Based on our experience thus far, it just works.

Lastly, we use Elasticsearch to support fuzzy searches on our talent data. It is the underlying search technology behind Talent Hunt, a feature within Employer.

The remainder are internal tools, which will be briefly touched on below.

1. Candidate

Candidate site

Gitlab Repo: https://gitlab.glints.com/glints/glints-dst

Example URLs: https://glints.sg/, https://glints.id/, https://glints.com/, https://*.glints.sg/

Description:

As its name suggests, Candidate is the site that allows our candidates to build their profiles and apply for jobs.

Notice that it serves multiple URLs with different country-code top-level domains (ccTLDs). Moreover, it also serves multiple subdomains. These subdomain sites are our whitelabel sites: sites licensed out to schools and organisations for them to handle hiring under their own branding and subdomain. An example is JOS, a multi-national firm that subscribed to our platform to manage their international hiring.

Some of the subdomain and ccTLD configurations are done through nginx, which can be found in the repository Dockerfiles, under the branches dst and dst-builder.

It's built with React, and styled with Semantic UI.

2. Employers

Employer site

Gitlab Repo: https://gitlab.glints.com/glints/glints-employers

Example URLs: https://employers.glints.sg/, https://employers.glints.id/, https://employers.glints.sg/gosea/

Description:

Employer, surprise, surprise, serves our employers. There are 2 groups of employers we serve: the free users and the paid ones. Free users can post a job for free and review their applicants. Paid users can do that, plus access Talent Hunt, our paid feature for companies to search for and receive recommendations of talent.

It's built with React, styled with Ant Design.

3. Dashboard

Temasek Polytechnic Dashboard

Gitlab Repo: https://gitlab.glints.com/glints/glints-dashboard

Example URLs: https://glints.sg/ngeeann, https://glints.sg/tp, https://glints.sg/jos

Description:

Dashboard used to serve both our candidates and employers, but has now been deprecated. It also has a copywriting portal for our copywriters to review and edit any job on the platform; this functionality, too, has been migrated to our employers project. Currently, it only serves our older whitelabel sites and school portals, and is scheduled to retire at the end of 2018.

4. API

Gitlab Repo: https://gitlab.glints.com/glints/glints-api

Example URLs: https://api.glints.com/api/features, https://api.glints.id/api/features

Description:

This is our monolithic API server, which is consumed by all our front-end clients. At this moment, it uses Koa as its middleware framework and sequelize as the ORM. The endpoints are fashioned after REST API conventions, and are documented at https://docs.glintsintern.com/api/.
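
As a quick illustration, a front-end client consumes these endpoints like any other REST API; the snippet below simply fetches the example features endpoint and is not taken from our client code.

```js
// Illustrative only: fetching a documented endpoint from a browser client.
fetch('https://api.glints.com/api/features')
  .then(response => response.json())
  .then(features => console.log('features:', features))
  .catch(err => console.error('API request failed', err));
```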

5. Static

Gitlab Repo: https://gitlab.glints.com/glints/static

Example URLs: https://glints.sg/schools/sp, https://glints.sg/schools

Description:

To allow our content team to quickly change the content of certain static pages, we pushed out this project so it can be deployed independently of the product team. These static pages are hosted on S3, at static.glints.sg or static.glints.id, and are proxied over in nginx.

6. i18n-editor

i18n Editor

Gitlab Repo: https://gitlab.glints.com/glints/glints-i18n-editor

URL: https://i18n-editor.glintsintern.com/

Description:

This is the editor our content writers use to translate the strings on our sites into various languages. Strings to be translated on our sites are annotated. Depending on the language preferences of the user, the corresponding JSON file is loaded from S3. What this editor does is simply load that JSON file from S3 into a table form, and then upload it back when the content writer saves it. This editor is being expanded to eventually allow our content writers to edit other pieces of content on our site.
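
For a rough idea of the mechanics, the sketch below shows how a locale file could be pulled from S3 and written back, assuming the aws-sdk package; the bucket and key names are made up for illustration.

```js
// Illustrative only: load and save a locale JSON file on S3.
// Bucket and key names are hypothetical.
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

async function loadTranslations(locale) {
  const object = await s3
    .getObject({ Bucket: 'glints-i18n', Key: `${locale}.json` })
    .promise();
  return JSON.parse(object.Body.toString('utf-8'));
}

async function saveTranslations(locale, translations) {
  await s3
    .putObject({
      Bucket: 'glints-i18n',
      Key: `${locale}.json`,
      Body: JSON.stringify(translations, null, 2),
      ContentType: 'application/json',
    })
    .promise();
}
```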

The site is only accessible with a Glints super admin account.

7. Mission Control/ Metabase

Metabase Dashboard

Gitlab Repo: https://gitlab.glints.com/glints/glints-mission-control-2

URL: https://mission-control.glintsintern.com/main, https://metabase.glintsintern.com

Description:

Mission Control is the original internal statistics dashboard that displays key metrics for our business. It pulls data directly from our database and displays it in a dashboard. We've since moved over to Metabase, an open source tool which does all that, plus offers a query builder. Metabase is accessible with any Glints email.

8. LinkedIn CV Parser

Gitlab Repo: https://gitlab.glints.com/glints/glints-linkedin-cv-parser

URL: resume-parser.glints.com

Description:

This project parses LinkedIn profiles exported as PDFs into our database schema. It is interfaced directly by our API server.

Overall Architecture

Glints Infrastructural Architecture

The diagram above illustrates our infrastructural architecture and is fairly self-explanatory. We use Rancher from Rancher Labs to manage our containers in the feature, staging and production environments.

Rancher environment

The hierarchy of abstraction in Rancher is as such: environments -> stacks -> services -> containers. Within an environment, stacks cluster together functionally related services.

API Code Architecture

Whilst our front-end clients have a pretty standard React-Redux architecture, our backend architecture can trip up junior developers at times, due to its moderate degree of abstraction. Here's the skinny on the lay of the API land. It's recommended that you reference the api project while you read this.

API file directories

We use koa as our middleware framework, and sequelize as our ORM.

The entry point of the API is app.js. This is the main application; server.js wraps around it and exposes it to the world. In the startup routine in app.js, you can see that there are 3 types of initialization: first the services, followed by the middlewares and finally the routes. Services refer to any external integrations or services being used by the API; examples include stripe for payment processing and mailgun for email delivery. Middlewares, in our context, are layers of functions that requests go through, and they're used for everything from calculating timeouts to sanitizing the request body. Lastly, routes are the API endpoints that the server exposes. These are all documented inline in the actions files. Actions refer to the individual endpoints that are exposed, and are clustered by the resource they access.
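
The condensed sketch below mirrors that startup order (services, then middlewares, then routes) in plain Koa; the helper names and the example route are illustrative, not the actual exports of app.js.

```js
// Illustrative only: the services -> middlewares -> routes startup order in Koa.
const Koa = require('koa');
const Router = require('koa-router');
const bodyParser = require('koa-bodyparser');

// 1. Services: external integrations such as payment or email providers.
function initialiseServices() {
  // e.g. configure the stripe and mailgun clients (stubbed out here)
}

const app = new Koa();
initialiseServices();

// 2. Middlewares: layers every request passes through, from response timing to body parsing.
app.use(async (ctx, next) => {
  const start = Date.now();
  await next();
  ctx.set('X-Response-Time', `${Date.now() - start}ms`);
});
app.use(bodyParser());

// 3. Routes: the endpoints exposed by the server (placeholder handler here).
const router = new Router();
router.get('/api/features', async ctx => {
  ctx.body = { features: [] };
});
app.use(router.routes());

module.exports = app; // server.js wraps this and listens on a port
```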

A few other folders remain. config contains the configuration for different environments. default.yml forms the base layer, and individual variables are overridden first by environment-specific config files (eg. staging.yml when NODE_ENV is staging), and finally by environment variables. Environment variables have the final say, and on staging and production they are configured on Rancher. Rancher secrets is where we store the more sensitive information. Never ever commit a private key to git, as git never forgets.
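
The sketch below captures that override order (default.yml, then the NODE_ENV-specific file, then environment variables), assuming the js-yaml package; the file paths and the example variable are illustrative, and the real project may rely on a config library instead.

```js
// Illustrative only: layered configuration with js-yaml.
const fs = require('fs');
const yaml = require('js-yaml');

function loadConfig() {
  const env = process.env.NODE_ENV || 'development';

  // Base layer: default.yml.
  const base = yaml.load(fs.readFileSync('config/default.yml', 'utf8')) || {};

  // Environment-specific file (e.g. staging.yml) overrides the defaults, if present.
  let envConfig = {};
  try {
    envConfig = yaml.load(fs.readFileSync(`config/${env}.yml`, 'utf8')) || {};
  } catch (err) {
    // No environment-specific file; stick with the defaults.
  }

  const config = { ...base, ...envConfig };

  // Environment variables have the final say (hypothetical key shown).
  if (process.env.DATABASE_URL) {
    config.databaseUrl = process.env.DATABASE_URL;
  }
  return config;
}

module.exports = loadConfig();
```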

lib contains our ActionRegistry, which, as its name suggests, registers our actions with the API and transforms them from objects into usable endpoints. The others are controllers, which string the middlewares together and form a bridge to the models. You'll notice a variety of controllers. Base Controller is a basic version, whilst REST Resource Controller is for accessing a single unit of a resource. REST Collection Controller is for accessing multiple units of a resource, and REST Associated Collection Controller is for accessing resources joined by foreign keys. Finally, Search Controller is specifically used to interface with elasticsearch.
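
To make the action/controller split more tangible, here is a hypothetical action object of the kind the ActionRegistry might turn into an endpoint; the field names and the ctx.models helper are assumptions for illustration, not the actual shape used in the codebase.

```js
// Illustrative only: a hypothetical action for a single resource,
// in the spirit of a REST Resource Controller.
const getFeature = {
  method: 'GET',
  path: '/api/features/:id',

  // The handler fetches a single unit of the resource and returns it.
  async handler(ctx) {
    const feature = await ctx.models.Feature.findByPk(ctx.params.id); // ctx.models is assumed here
    if (!feature) {
      ctx.throw(404, 'Feature not found');
    }
    ctx.body = feature;
  },
};

module.exports = getFeature;
```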

models is the sequelize abstraction of our tables. It corresponds to our schema, with additional hooks included to interject before or after various database operations.
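
A minimal sketch of what that looks like in sequelize, with an illustrative model and hook rather than an actual Glints model:

```js
// Illustrative only: a sequelize model with a hook that runs before creation.
const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize(
  process.env.DATABASE_URL || 'postgres://localhost:5432/glints_dev'
);

const Application = sequelize.define('Application', {
  status: { type: DataTypes.STRING, defaultValue: 'NEW' },
});

// Hooks interject before or after database operations.
Application.addHook('beforeCreate', application => {
  application.status = application.status.toUpperCase();
});

module.exports = Application;
```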

The last folder of note is tasks. These are background tasks delegated to the API workers. They're enqueued in resque by the main API instance, and are then popped off the queue to be processed by the workers.
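
The sketch below shows the enqueue side of that flow, assuming the node-resque package; the queue and job names are made up, and the actual task definitions live in the tasks folder.

```js
// Illustrative only: the API instance enqueues a job for the workers to pick up.
const { Queue } = require('node-resque');

async function enqueueWelcomeEmail(userId) {
  const queue = new Queue({
    connection: { host: process.env.REDIS_HOST || 'localhost', port: 6379 },
  });
  await queue.connect();

  // Push the job onto the 'emails' queue; a worker service pops it off and runs it.
  await queue.enqueue('emails', 'sendWelcomeEmail', [userId]);
  await queue.end();
}
```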

Some upcoming changes to our API are the introduction of TypeScript and the migration to a service-based architecture.

Process

Getting Started

This section is intended for new members fresh to our stack. We assume that you're on a UNIX-based OS. Windows users, please figure out your corresponding commands, until we update this guide to serve your demographic.

1. Prerequisite Toolchain

Please ensure that you have the following tools:

Also ensure with your team lead that you have the following accounts created and/or linked to Glints:

2. Local Repository

Once all is in place, you may proceed to recursively git clone our main glints project and its submodules at https://gitlab.glints.com/glints/glints. Simply follow the README instructions.

We use docker to standardize the development environment and ease setup for all developers. To further ease this process, we've developed a glints-cli, available at https://gitlab.glints.com/glints/glints-cli. Simply follow the README for setup and usage.

3. Localhost Startup

Once all is set up, you can, for a start, run docker-compose up -d, or glints up -d if using glints-cli. We'll use the docker-compose commands for universality, though glints is interchangeable if glints-cli is set up. Proceed to check the various projects at the ports listed when you run docker-compose ps.

4. Local Database Population

Once the projects are up and running, the next step is to populate the data. This is needed before you can log in and do anything useful on localhost. The way to do it is via glints backup first, then glints restore. The first command will copy the staging DB into a pg_dump file, which can optionally be encrypted too; simply follow the prompts. The second command will then load it into the local postgres DB. The staging postgres password can be obtained on Rancher. It is in the environment variables of postgres in the staging environment, accessible when you upgrade the service.

For a comprehensive list of docker-compose commands, refer to Compose command-line reference.

Devops

Our CI/CD pipelines

Our CI/CD pipelines in the dockerfiles project

1. Continuous Integration and Deployment

We use GitLab's CI/CD tooling for our continuous integration and deployment. To read about the benefits and practice of CI/CD, click here. The unit tests are mostly run in the project-specific pipelines.

Once those tests pass, a corresponding branch pipeline is triggered in the dockerfiles project to generate the production build and deployment. The pipeline stages are configured in the .gitlab-ci.yml file of every project.

2. Git Workflow

Our deployment branches

There are 3 main branches that we work off of: develop, staging and master. Whenever we are fixing a bug, we branch off from develop into a hotfix branch. Likewise, features are branched off into a feature branch. Long-running (more than 1 week) hotfix or feature branches should be rebased on top of develop at least once per week to avoid the pain of merge conflicts.

Once done, the feature or hotfix branch is merged into develop, where tests and lints are run. When ready to be pushed to the staging environment, the develop branch is merged into staging. Likewise, for the production environment, the staging branch is merged into master. If tests pass, the build is automatically deployed. Staging URLs are easily constructed by prepending staging. to glints in the production URL. For example, the staging URL for employers.glints.com is employers.staging.glints.com. The last thing of note is our feature environments. To promote more timely QA, feature branches (with branch names prefixed with feature/) trigger the building of a new feature environment in Rancher. Its corresponding URL would be <feature_name>.glints.com. This can be used by vertical teams to test out WIP epics with stakeholders before they're ready to be staged.

3. Versioning

We abide by semver conventions, using mversion. During every feature or hotfix merge into develop, we first submit a merge request to a reviewer, who then merges it into develop upon review acceptance. Before merging from develop to staging, the developer performing the merge first bumps the version of the project accordingly based on semver, using mversion. Ensure that the newly created version git tag is pushed to remote, as the post-creation tag push sometimes fails for various reasons. This tag then determines the docker image that is used to create new docker containers on Rancher.

Team Structure

We organize ourselves around small, cross-functional teams of around 9 people, each tasked to achieve a short- to mid-term goal. These teams consist of all the functions, skills and perspectives required to accomplish the goal, and thus frequently include members from beyond the product team. Team membership and structure are also fluid, meaning that they can change from time to time depending on the demands of the situation.

Rituals

1. Bi-weekly Sprints

We work in bi-weekly sprints. Each small team holds a sprint planning session on the Monday at the start of every sprint, to decide on the features or fixes to release during the sprint. The following are the preparatory work to be done prior to the meeting, and the agenda of the meeting.

Sprint Planning Preparation

Everyone in the team:

Team Lead:

Sprint Planning Agenda

Celebration and Announcement (5 mins)

OKR review (5 mins)

Next Sprint Proposal (20 mins)

Outcome:

2. Kanban

We use kanban to manage the flow of tasks on our kanban board on Trello. There are a few tenets that we abide by:

  1. All tasks must be on the board. If it's not, create a card.
  2. Cards can only flow in one direction, from left to right. If a card needs to be reworked, create a new card with the updated content, and archive the old one.
  3. A card can only move to the next stage if it has fulfilled the Done Rules, which are both populated in the card automatically and referenced in the Done Rules card at the top of the stage.
  4. There are Work-in-Progress limits on each stage. Focus on clearing the blocked stages first.

On top of the board, we hold a daily standup meeting, where everyone reports on what they did the day before, what they plan to do today, and the obstacles they face. The purpose is for the whole team to keep abreast of where everyone is at, coordinate dependencies and maintain a steady rhythm of progress. The tenets for effective standups are:

  1. Keep it succinct. Only share what the rest of the team needs to know.
  2. Always include Trello card links in your posted standup updates when possible.
  3. Listen actively to everyone else. Catch out dependencies.

Performance Management

1. OKR

We employ the Objectives and Key Results (OKR) framework for managing performance. Objectives are the high-level goals to be achieved, whilst key results are the measurable yardsticks for determining objective fulfilment. In other words, if all the key results are met, the objective is necessarily achieved. OKRs cascade down from the company level to the department, team and individual levels. OKRs are what guide our sprint planning decisions.

2. 1-1

At least once per month, every member will have a 1-1 session with his/her direct manager. This is a time for the member to voice any feedback or concerns he/she has about anything in the company. The direct manager will also provide feedback from their own and their peers' observations about the member's performance with regard to Glints's cultural values (RIBCO) and their skill sets. The outcome of every 1-1 is a set of actionables for both the member and the direct manager to work on until the next 1-1.

3. 360 Review

Minimally once per quarter, we'll hold a 360 review. This is where anyone you work with, be it your direct managers, reports or peers, will give you feedback about your performance and alignment with the cultural values.

Best Practices

This is a section that will be continually expanded as we develop best practices around certain processes.

1. Code Review Checklist

General

Security

Documentation

Testing

2. Communication Tips

As we're a distributed team, communication, both written and verbal, is of paramount importance. Here are some tips that we learnt the painful way (specific to Slack).

  1. Always err on the side of overcommunication.
  2. Use simple words. Be as specific as possible.
  3. Always respond, ideally in a thread, when you are personally tagged.
  4. If applicable, acknowledge your receipt of a channel announcement with a reaction.
  5. Default to public channels when creating new chat groups. It's better for channel discovery and transparency.