James & Jenny, our toolbox for faster Drupal development
09 Nov
Nick Vanpraet
Tech
Drupal 8
Be aware: this is a long read with extensive value. Only read this if you are ready to uncover our Dropsolid team's exciting dev tool and platform secrets!
Update - April 2018: Our platform has entered the next development phase on its roadmap, and we are proud to announce that James and Jenny have now been successfully merged together onto the Dropsolid platform. The technical architecture, as explained below, is still highly relevant, but we no longer have to distinguish between the two core elements from a user perspective.
James & Jenny might sound more like a comedy double act or the protagonists of a long-forgotten tale, but they are in fact very much alive and kicking. They are the names we gave to the platforms that we developed in-house to spin up environments faster and get work done more efficiently. How? Read on!
In practice
Whenever we want to spin up a new server, start a new project or even create a new testing environment, we still rely on our infrastructure team. A while ago we managed to automate our build pipeline with some smart configuration of Jenkins, an open source piece of software. Combined with a permission system, we are already able to let technical clients or consultants participate in the development process of a site by triggering a build of an environment. We decided to call this home-coded piece of software James, our in-house Drupal Cloud Butler. However, this UI was very cluttered and it was easy to break the chain. Maintenance-wise, it wasn’t the friendliest system either. James 0.1 was very helpful, but needed polishing.
Behind the scenes we started building a proper platform that was designed to supersede this existing system and take over the creation of new servers, projects and environments by adding a layer on top of this - a layer that could talk to Jenkins and would be able to execute Ansible playbooks through a managed system via RabbitMQ. You could see this as James 0.2. This version of James only has one account and isn’t built with a great many permissions in mind. Its purpose is very simple: get stuff done. This means we still can’t let clients or internal staff create new environments on James directly or set up new projects. But we’d really like to.
This is why we're currently also investing heavily in the further development of Jenny, the site-spinning machine. Jenny aims to be a user-friendly layer on top of James and consists of two parts: a loosely decoupled Angular application consuming a Drupal 8 backend exposed through a REST API, which in turn talks to James through its REST API. Because Jenny makes sure only calls that are allowed go through to James, James can stay focused on functionality without having to add a ton of logic to make sure the request is valid. If the person who wants that new environment isn't allowed to request one, Jenny won't ask James to set it up in the first place.
How it works
A Jenny user will be able to create a new organization, and within that organization create new projects or clone existing ones. These projects can be housed on our servers or on external hosting (with or without VPN, firewalls or anything else that's required). They'll be able to create new environments, archive entire projects or just a single environment, build, back up, restore, sync across environments, log in to an environment's site, etc. Jenny will even contain information about the health of the servers and also provide analytics about the sites themselves.
Now, because single-person organizations are rather rare, that user will be able to add other users to their organization and give them different permissions based on their actual role within the company. A marketeer doesn't need to know the health of a feature-testing environment, and a developer has little use for analytics about the live environment.
The goal of this permission system is to give the client enough options to, for example, restrict a developer from archiving the live environment while still allowing them to create a new testing environment and get all the information and access they need for it. On a side note: these aren't standard Drupal permissions, because they apply to memberships within an organization, and a single user can be part of many organizations and have different permissions in each one.
End-to-end
But all these layers have to be able to talk to each other before any of that can happen. JennyA(ngular) has to talk to JennyB(ackend), and JennyB then has to make sure the request is valid and talk to James. Whatever information James returns has to be checked by JennyB, stored in the database if needed, and then transformed into a message that JennyA can do something with.
To make sure we can actually pull this off, we created the following test case: how do we trigger a build of an environment in Jenkins from JennyA, and how do we show the Jenkins build log in JennyA?
JennyA: build the page, get project and environment info from JennyB, create a button and send a request to the API. How this process happens exactly will be explained in a different post.
JennyB
For this REST resource we need two entities: Project and Environment.
We create some new permissions (defined as options in an OrgRole entity) for our Environment entity type; a sketch of how these options could be stored follows the list:
- Create environment
- Edit environment
- Delete environment
- Archive environment
- View environment
- View archived environment
- Build environment
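The OrgRole entity itself isn't shown in this post, so here is only a rough sketch of the idea: the field name, option keys and the baseFieldDefinitions() approach are assumptions on our part, not necessarily our exact implementation. The permission strings mirror the ones checked in the access handler further down.

// Sketch: inside the OrgRole content entity class.
// Needs these use statements at the top of the class file:
//   use Drupal\Core\Entity\EntityTypeInterface;
//   use Drupal\Core\Field\BaseFieldDefinition;
//   use Drupal\Core\Field\FieldStorageDefinitionInterface;
public static function baseFieldDefinitions(EntityTypeInterface $entity_type) {
  $fields = parent::baseFieldDefinitions($entity_type);

  // The permissions are simply allowed values on a multi-value list field.
  $fields['permissions'] = BaseFieldDefinition::create('list_string')
    ->setLabel(t('Permissions'))
    ->setCardinality(FieldStorageDefinitionInterface::CARDINALITY_UNLIMITED)
    ->setSetting('allowed_values', [
      'create project environment' => 'Create environment',
      'update project environment' => 'Edit environment',
      'delete project environment' => 'Delete environment',
      'archive project environment' => 'Archive environment',
      'view project environment' => 'View environment',
      'view archived project environment' => 'View archived environment',
      'build project environment' => 'Build environment',
    ]);

  return $fields;
}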
In addition, we build a custom EntityAccessControlHandler that checks these custom permissions. An access control handler must have two methods: checkAccess() and checkCreateAccess(). In both we want to make sure Drupal's normal permissions (which for this entity we reduce to a single 'administer project environment entities' permission) still rule supreme, so superadmins can debug everything. That is why both access checks start with a normal, bog-standard $account->hasPermission() check.
if ($account->hasPermission('administer project environment entities')) {
  return AccessResult::allowed();
}
But then we have to add some extra logic to make sure the user is allowed to do whatever it is they’re attempting to do. For that we grab that user’s currently active Membership. A Membership is a simple entity that combines a user, an organization, and an OrgRole entity which says what permissions the user has within that organization. For non-Create access we first check if this user is even a part of the same organization as the entity they’re trying to save.
// Get the organization for this project environment.
$organization = $entity->getProject()->getOrganization();
// Check that the active membership and the attached organization match.
$accessResult = Membership::checkIfAccountIsPartOfCorrectOrganization($organization, $account);
if ($accessResult->isForbidden()) {
  return $accessResult;
}
UPDATE: it is important to add all the cacheability metadata you need to your AccessResults. If, as in our case, the result varies per user, per active membership and per the roles of that membership, we have to add each of those as a cacheable dependency. Sometimes the result also depends on the environment, project or organization. When writing access checks, just remember to take a step back and think of any other entities or general contexts that influence the result of your access check. For example:
$result = AccessResult::allowedIf($condition)
  ->addCacheableDependency($user)
  ->addCacheableDependency($activeMembership);
foreach ($activeMembership->getRoles() as $role) {
  $result->addCacheableDependency($role);
}
For brevity's sake, I won't explain exactly how checkIfAccountIsPartOfCorrectOrganization does its checks, but it returns an AccessResultInterface object and does exactly what it says on the tin. It also includes a reason for forbidding access, so we can more easily debug problems. You can pass a string when creating an AccessResult or use $accessResult->setReason(), and you can then grab it using $accessResult->getReason(). Take note: only the forbidden and neutral results implement those methods, so make sure the result implements AccessResultReasonInterface before calling either of them.
if ($accessResult instanceof AccessResultReasonInterface) {
  $accessResult->getReason();
}
We use this extensively in our unit tests, so we know exactly why something fails.
Assuming the organization check passes, we can finally check whether the user has the correct permissions.
$entityOrganizationMembership = User::load($account->id())->getActiveMembership();
switch ($operation) {
  case 'view':
    if (!$entity->isActive()) {
      return $this->allowedIf(
        $entityOrganizationMembership->hasPermission('view archived project environment'),
        'member does not have "view archived project environment" permission'
      );
    }
    return $this->allowedIf(
      $entityOrganizationMembership->hasPermission('view project environment'),
      'member does not have "view project environment" permission'
    );

  case 'update':
  case 'delete':
  case 'archive':
  case 'build':
    return $this->allowedIf(
      $entityOrganizationMembership->hasPermission($operation . ' project environment'),
      'member does not have "' . $operation . ' project environment" permission'
    );
}

// Unknown operation, no opinion.
return AccessResult::neutral('No operation matches found for operation: ' . $operation);
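One thing this snippet glosses over: $this->allowedIf() is not something the base access control handler gives you; it's a small helper of our own. Its exact body isn't shown in this post, but based on how it is called above, a minimal sketch could look like this (the neutral-versus-forbidden choice is an assumption on our part):

/**
 * Sketch of a helper that mimics AccessResult::allowedIf() but attaches a
 * human-readable reason to the negative result, so callers (and our unit
 * tests) can see why access was not granted.
 */
protected function allowedIf($condition, $reason) {
  // Don't forget the cacheability metadata mentioned in the update above.
  return $condition
    ? AccessResult::allowed()
    : AccessResult::neutral($reason);
}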
As you might have noticed, normally when you load a User you don’t get a getActiveMembership() method. But we extended the base Drupal User class and added it there. We also set that new class as the default class for the User entity, which is actually very easy:
function hook_entity_type_build(&$entity_types) {
  if (isset($entity_types['user'])) {
    $entity_types['user']->setClass('Drupal\my_module\Entity\User');
  }
}
Now loading a user returns an instance of our own class.
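For completeness, here is a rough sketch of that extended class. How the "active" membership is looked up (the entity type ID and field name) is an assumption on our part, since the Membership storage isn't shown in this post:

namespace Drupal\my_module\Entity;

use Drupal\user\Entity\User as CoreUser;

/**
 * Extends the core User entity with membership helpers (sketch).
 */
class User extends CoreUser {

  /**
   * Returns the membership this user is currently acting under, if any.
   */
  public function getActiveMembership() {
    // Assumption: memberships live in a 'membership' entity type with a
    // 'user_id' reference back to the user.
    $memberships = \Drupal::entityTypeManager()
      ->getStorage('membership')
      ->loadByProperties(['user_id' => $this->id()]);

    // For this sketch, simply return the first membership found; the real
    // implementation would track which membership is the active one.
    return $memberships ? reset($memberships) : NULL;
  }

}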
For checkCreateAccess() things get trickier, because at that point the entity doesn't exist yet. This makes it impossible to check whether it's part of the correct organization (or in this case, the correct project, which is in turn part of an organization). So here we'll also have to implement a field-level Constraint on the related project field. This article explains how to create a field-level Constraint.
In this Constraint we can do our Membership::checkIfAccountIsPartOfCorrectOrganization check and be sure nobody can save an environment to a project of an organization they are not part of, regardless of whether they are creating a new one or updating an existing one (somehow having bypassed our access check). To make doubly sure, we also set the $validationRequired property on our Environment class to TRUE. This way entities will always demand to be validated first; if they are not, or they have errors, an exception is thrown.
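The linked article covers field-level Constraints in detail, but as a quick, hypothetical sketch of the idea (the plugin ID, class names and message are ours), the constraint plus its validator boil down to re-using the same organization check:

namespace Drupal\my_module\Plugin\Validation\Constraint;

use Drupal\my_module\Entity\Membership;
use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;

/**
 * Checks that the referenced project belongs to the user's organization.
 *
 * @Constraint(
 *   id = "ProjectInUserOrganization",
 *   label = @Translation("Project belongs to the user's organization")
 * )
 */
class ProjectInUserOrganizationConstraint extends Constraint {

  public $message = 'You are not a member of the organization this project belongs to.';

}

/**
 * Validates the constraint above (normally in its own file).
 */
class ProjectInUserOrganizationConstraintValidator extends ConstraintValidator {

  public function validate($items, Constraint $constraint) {
    if ($items->isEmpty()) {
      return;
    }
    // The project field is an entity reference, so grab the referenced entity.
    $project = $items->entity;
    $organization = $project->getOrganization();
    // Re-use the same check as in the access handler.
    $result = Membership::checkIfAccountIsPartOfCorrectOrganization($organization, \Drupal::currentUser());
    if ($result->isForbidden()) {
      $this->context->addViolation($constraint->message);
    }
  }

}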
Now we can finally build our REST resource. Since a Jenkins build doesn't exist as a custom entity within JennyB (yet), we create a custom REST resource using Drupal Console and set the canonical path to "/api/project_environment/{project_environment}/build/{id}" and the "create" path to "/api/project_environment/{project_environment}/build". We then create another resource and set that one's canonical path to "/api/project_environment/{project_environment}/build", the same as our first resource's "create" path. This way, when you POST to that path you trigger a new build, and when you GET it you receive a list of all builds for that environment. We have to split this across two resources, because each resource can only use each HTTP method once.
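In code, those paths end up in the @RestResource plugin annotation. A minimal sketch of the first resource (the plugin ID and class name are ours; depending on your core version, the second key may need to be the full link-relation URL instead of "create"):

namespace Drupal\my_module\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;

/**
 * Triggers builds for a project environment and exposes a single build.
 *
 * @RestResource(
 *   id = "project_environment_build",
 *   label = @Translation("Project environment build"),
 *   uri_paths = {
 *     "canonical" = "/api/project_environment/{project_environment}/build/{id}",
 *     "create" = "/api/project_environment/{project_environment}/build"
 *   }
 * )
 */
class ProjectEnvironmentBuildResource extends ResourceBase {
  // post() triggers a new build, get() returns a single build's status.
}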
Before we can begin with our logic proper, we have to make sure the ProjectEnvironment entity gets loaded automatically. For this we override the routes() method from the parent class.
public function routes() {
  $collection = parent::routes();
  // Add our paramconverter to all routes in the collection.
  // If we only wanted to add options to a few routes, we would have
  // to loop over $collection->all() and add them to specific ones.
  // Internally, that is exactly what the addOptions method does anyway.
  $options['parameters']['project_environment'] = [
    'type' => 'entity:project_environment',
    'converter' => 'paramconverter.entity',
  ];
  $collection->addOptions($options);
  return $collection;
}
In the routes() method you can add or remove options and requirements to your heart's content. Whatever you can normally do in a routing.yml file, you can also do here. We've explained this in more detail in this blog post.
Let's take a closer look at our create path. First we'll need to make sure the user is allowed to build. Luckily, thanks to our custom access handler, this is very easy.
// Check if the user can build.
$entity_access = $projectEnvironment->access('build', NULL, TRUE);
if (!$entity_access->isAllowed()) {
  // If it's not allowed, we know it's a forbidden or neutral response,
  // which implements the Reason interface.
  throw new AccessDeniedHttpException($entity_access->getReason());
}
Now we can ask James to trigger the build.
// Talk to James.
$data['key'] = self::VALIDATION_KEY;
$url = self::API_URL . '/project/' . $projectEnvironment->getProject()->getRemoteProjectID()
  . '/environment/' . $projectEnvironment->getRemoteEnvironmentID() . '/build';
$response = $this->httpClient->request('POST', $url, array('json' => $data));
$responseData = json_decode($response->getBody()->getContents(), TRUE);
For this test we use a simple key that James uses for authentication and build the URL in our REST resource. Eventually this part will be moved to a library and the code might look something like this:
$remoteProjectID = $projectEnvironment->getProject()->getRemoteProjectID();
$remoteEnvironmentID = $projectEnvironment->getRemoteEnvironmentID();
$response = $this->jamesConnection->triggerNewBuild($remoteProjectID, $remoteEnvironmentID, $data);
$responseData = json_decode($response->getBody()->getContents(), TRUE);
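That jamesConnection service doesn't exist yet; a hypothetical sketch of such a thin wrapper around Guzzle (the class name, constructor arguments and method are assumptions) could look like this:

namespace Drupal\my_module\James;

use GuzzleHttp\ClientInterface;

/**
 * Thin wrapper around the James REST API (hypothetical sketch).
 */
class JamesConnection {

  protected $httpClient;
  protected $apiUrl;
  protected $validationKey;

  public function __construct(ClientInterface $http_client, $api_url, $validation_key) {
    $this->httpClient = $http_client;
    $this->apiUrl = $api_url;
    $this->validationKey = $validation_key;
  }

  /**
   * Asks James to trigger a new build for the given remote environment.
   */
  public function triggerNewBuild($remoteProjectID, $remoteEnvironmentID, array $data = []) {
    $data['key'] = $this->validationKey;
    $url = $this->apiUrl . '/project/' . $remoteProjectID
      . '/environment/' . $remoteEnvironmentID . '/build';
    return $this->httpClient->request('POST', $url, ['json' => $data]);
  }

}

Registered as a service, a wrapper like this keeps the REST resources free of any knowledge about James' URLs or authentication.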
We check the data we get back and if everything has gone well, we can update our local ProjectEnvironment entity with the new currently deployed branch.
if ($response->getStatusCode() == 200 && $data['branch'] !== $projectEnvironment->getCurrentlyDeployedBranch()) {
  // Everything went fine, so also update the $projectEnvironment to reflect
  // what the currently deployed branch is.
  $projectEnvironment->setCurrentlyDeployedBranch($data['branch']);
  // Validate the entity.
  $violations = $projectEnvironment->validate();
  foreach ($violations as $violation) {
    $errors[] = $violation->getMessage();
  }
  if (isset($errors)) {
    throw new BadRequestHttpException("Entity save validation errors: " . implode("\n", $errors));
  }
  // Save it.
  $projectEnvironment->save();
}
Running validate is necessary, because we set the $validationRequired property to TRUE for our entity type. If something goes wrong, including our custom Constraints, we throw a Bad Request exception and output the validation errors.
Then we simply return what James gave us.
return new ResourceResponse($responseData, $response->getStatusCode());
On James’ end, it’s mostly the same but instead of checking custom access handlers, we (for now) just validate the key. And James in turn calls Jenkins’ API. This will also change, and James will hand off the build trigger to RabbitMQ. But for the purpose of this test, we communicate with Jenkins directly.
James then returns the ID of the newly triggered build to JennyB, who returns it to JennyA. JennyA then uses that ID to poll JennyB's canonical Build route until the build has succeeded or failed.
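The canonical side of that Build resource isn't covered in this post, but as a hypothetical sketch (the getBuildStatus() call on the James wrapper is an assumption of ours), its get() method would follow the same pattern as the POST: check access, relay the call to James and return whatever comes back.

public function get($projectEnvironment, $id) {
  // Viewing build information requires the 'view' operation on the environment.
  $entity_access = $projectEnvironment->access('view', NULL, TRUE);
  if (!$entity_access->isAllowed()) {
    throw new AccessDeniedHttpException($entity_access->getReason());
  }

  // Ask James for the status and log of this specific build (assumed endpoint).
  $response = $this->jamesConnection->getBuildStatus(
    $projectEnvironment->getProject()->getRemoteProjectID(),
    $projectEnvironment->getRemoteEnvironmentID(),
    $id
  );
  $responseData = json_decode($response->getBody()->getContents(), TRUE);

  return new ResourceResponse($responseData, $response->getStatusCode());
}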
Curious to read more interesting Drupal-related tidbits? Check out the rest of our blog. Or simply stay up to date every three months and subscribe to our newsletter!