Lucius Digital: Update OpenLucius | Major facelift for this social productivity Drupal distro

Planet Drupal - Tue, 2020/11/24 - 10:05am
Over the last few months, more than 300 people tested the alpha-3 version of OpenLucius: a social productivity platform built as a Drupal distro. We interviewed them and soon came to the conclusion that the base layouts needed big improvements. The design was received as 'meh..', and we agreed. So we went to work and released alpha-4 today. We implemented a completely new base theme from scratch: clean, lean, fast and Bootstrap 4 based. The goal is to leave all the room for custom branding and other design needs.

Web Omelette: Ajax elements in Drupal form tables

Planet Drupal - Tue, 2020/11/24 - 9:20am

Maybe you have banged your head against the wall trying to figure out why an Ajax button (or any other element) added inside a table just doesn’t work. I have.

I was building a complex form that needed to render some table rows, nicely formatted and have some operations buttons to the right to edit/delete the rows. All this via Ajax. You know when you estimate things and you go like: yeah, simple form, we render table, add buttons, Ajax, replace with text fields, Save, done. Right? Wrong. You render the table, put the Ajax buttons in the last column and BAM! Hours later, you wanna punch someone. When Drupal renders tables, it doesn’t process the #ajax definition if you pass an element in the column data key.

Well, here’s a neat little trick to help you out in this case: #pre_render.

What we can do is add our buttons outside the table and use a #pre_render callback to move the buttons back into the table where we want them. Because by that time, the form is processed and Drupal doesn’t really care where the buttons are. As long as everything else is correct as well.

So here’s what a very basic buildForm() method can look like. Remember, it doesn’t do anything yet; it just ensures we can get our Ajax callback triggered.

/**
 * {@inheritdoc}
 */
public function buildForm(array $form, FormStateInterface $form_state) {
  $form['#id'] = $form['#id'] ?? Html::getId('test');

  $rows = [];
  $row = [
    $this->t('Row label'),
    []
  ];
  $rows[] = $row;

  $form['buttons'] = [
    [
      '#type' => 'button',
      '#value' => $this->t('Edit'),
      '#submit' => [
        [$this, 'editButtonSubmit'],
      ],
      '#executes_submit_callback' => TRUE,
      // Hardcoding for now as we have only one row.
      '#edit' => 0,
      '#ajax' => [
        'callback' => [$this, 'ajaxCallback'],
        'wrapper' => $form['#id'],
      ]
    ],
  ];

  $form['table'] = [
    '#type' => 'table',
    '#rows' => $rows,
    '#header' => [$this->t('Title'), $this->t('Operations')],
  ];

  $form['#pre_render'] = [
    [$this, 'preRenderForm'],
  ];

  return $form;
}

First, we ensure we have an ID on our form so we have something to replace via Ajax. Then we create a row with two columns: a simple text and an empty column (where the button should go, in fact).

Outside the form, we create a series of buttons (1 in this case), matching literally the rows in the table. So here I hardcode the crap out of things but you’d probably loop the same loop as for generating the rows. On top of the regular Ajax shizzle, we also add a submit callback just so we can properly capture which button gets pressed. This is so that on form rebuild, we can do something with it (up to you to do that).

Finally, we have the table element and a general form pre_render callback defined.

And here are the two referenced callback methods:

/**
 * {@inheritdoc}
 */
public function editButtonSubmit(array &$form, FormStateInterface $form_state) {
  $element = $form_state->getTriggeringElement();
  $form_state->set('edit', $element['#edit']);
  $form_state->setRebuild();
}

/**
 * Prerender callback for the form.
 *
 * Moves the buttons into the table.
 *
 * @param array $form
 *   The form.
 *
 * @return array
 *   The form.
 */
public function preRenderForm(array $form) {
  foreach (Element::children($form['buttons']) as $child) {
    // The 1 is the cell number where we insert the button.
    $form['table']['#rows'][$child][1] = [
      'data' => $form['buttons'][$child]
    ];
    unset($form['buttons'][$child]);
  }
  return $form;
}

First we have the submit callback which stores information about the button that was pressed, as well as rebuilds the form. This allows us to manipulate the form however we want in the rebuild. And second, we have a very simple loop of the declared buttons which we move into the table. And that’s it.

Of course, our form should implement Drupal\Core\Security\TrustedCallbackInterface and its method trustedCallbacks() so Drupal knows our pre_render callback is secure:

/**
 * {@inheritdoc}
 */
public static function trustedCallbacks() {
  return ['preRenderForm'];
}

And that’s pretty much it. Now the Edit button will trigger the Ajax, rebuild the form and you are able to repurpose the row to show something else: perhaps a textfield to change the hardcoded label we did? Up to you.

Hope this helps.


Third & Grove: Best of Acquia Engage 2020

Planet Drupal - Tue, 2020/11/24 - 7:35am

Acquia Engage is over, but its great lessons and content live on. You can access all of the Engage session recordings on the event site. Here are our top six can’t-miss sessions to catch up on if you couldn’t make it to Engage.


Mateu Aguiló: How to use HTTPS in your local environment

Planet Drupal - Tue, 2020/11/24 - 1:00am
I have several local development environments on my machine. I would like to use HTTPS on them without much hassle. That is why I decided to create my own custom Certificate Authority to simplify the process.


Gábor Hojtsy: Make your project Drupal 9 compatible this week for a chance at one of two free DrupalCon Europe tickets!

Planet Drupal - Mon, 2020/11/23 - 6:15pm

Do you own an existing project that does not yet have a Drupal 9 compatible release? This week would be a good time to take that step and make a Drupal 9 compatible release! I am paying for two tickets to DrupalCon Europe for new Drupal 9 compatible releases. Read on for exact rules!


Evolving Web: Migrating Content Translated with Entity Translation to Drupal 8/9 (Updated 2020)

Planet Drupal - Mon, 2020/11/23 - 3:30pm

In Drupal 8 there is only one unified way of translating content to different languages. However, in Drupal 7, there were two different ways to do it:

  • Using core's Content Translation module: this will create separate nodes per language.
  • Using the contributed Entity Translation module: this maintains a single node and the translation happens at a field level.

Of the two, Entity Translation is closer to Drupal 8's implementation of translation. Evolving Web has written blog posts about both methods in the past: content translation and entity translation.

However, that was a while ago, and there are updated ways to proceed with an entity translation migration. We'll go over those updated methods in this article.

Before We Start

The Problem

Our imaginary client has provided a database dump for their Drupal 7 site containing some nodes. These nodes might have translations in English, Spanish and French, but there could also be some non-translatable nodes. We need to migrate those nodes to Drupal 8 while preserving the translations.

Setting up the Migration

Create the migration module

We need to create a module for our migrations. In this example, we're naming it migrate_example_entity_translation. 

We then need to add the following modules as dependencies in the module declaration:
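The dependency list itself is not reproduced here, but a sketch of what migrate_example_entity_translation.info.yml might look like follows; the exact dependencies are an assumption based on the modules this kind of Drupal 7 migration typically needs:

```yaml
name: Migrate Example Entity Translation
type: module
description: Example migrations for Drupal 7 entity translations.
core_version_requirement: ^8.8 || ^9
# Assumed dependencies; adjust to match your actual setup.
dependencies:
  - drupal:migrate
  - drupal:migrate_drupal
  - migrate_plus:migrate_plus
  - migrate_tools:migrate_tools
```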

Create a migration group

To group the migrations, we also need to create a migration group. To do so, we’ll create a fairly simple configuration file so that the group gets created when the module is installed. The file’s contents should be as follows:

id: entity_translation
label: Entity Translation Group
source_type: Drupal 7
shared_configuration:
  source:
    key: migrate_d7

Define a new database connection

Next, you need to load the Drupal 7 database into your Drupal 8 installation. To do so, you need to define a new database connection in your settings.php file like this:

$databases['migrate_d7']['default'] = array(
  'driver' => 'mysql',
  'database' => 'migrate_d7',
  'username' => 'user',
  'password' => 'password',
  'host' => 'db',
  'prefix' => '',
);

And then you can import the database dump into this new database connection using your preferred method.

Writing the Migrations

The next thing to do is to write the actual migrations. Per our requirements, we need to write two different migrations: one for the base nodes and one for the translations.

Since Drupal 8.1.x, migrations are plugins that should be stored in a migrations folder inside any module. You can still make them configuration entities as part of the migrate_plus module but I personally prefer to follow the core recommendation because it's easier to develop (you can make an edit and just rebuild cache to update it).

Write the base nodes migration

The first migration to write is the base nodes migration. This will be just a simple migration without anything special related to entity translation.

The full migration file will look like this:

id: example_creature_base
label: Creature base data
migration_group: entity_translation
migration_tags:
  - node
  - Drupal 7
source:
  plugin: d7_node
  node_type: article
destination:
  plugin: entity:node
process:
  type:
    plugin: default_value
    default_value: article
  title: title
  status: status
  langcode: language
  created: created
  changed: changed
  promote: promote
  sticky: sticky
  revision_log: log
  field_one_liner: field_one_liner
  body/value: body/value
  body/format:
    plugin: default_value
    default_value: full_html

Write the translations migration

Now we should write the translations migration. We start by creating the migration file. In this case, we'll name it example_creature_translations.yml. The source section of this migration will look like this:

source:
  plugin: d7_node_entity_translation
  node_type: article

In the plugin, we're using d7_node_entity_translation; this is a plugin already included in core to handle this type of migration.

The destination for this plugin will be pretty similar to the destination for the base migration. It will look like this:

destination:
  plugin: entity:node
  translations: true
  destination_module: content_translation

Now it's time to write the process section for this migration. It will be pretty similar to the base migration. You only need to keep in mind that you need to migrate two new properties: content_translation_source and content_translation_outdated. So, your process section will look like this:

process:
  nid:
    plugin: migration_lookup
    migration: example_creature_base
    source: entity_id
  type:
    plugin: default_value
    default_value: article
  title: title
  status: status
  langcode: language
  created: created
  changed: changed
  promote: promote
  sticky: sticky
  revision_log: log
  field_one_liner: field_one_liner
  body/value: body/value
  body/format:
    plugin: default_value
    default_value: full_html
  content_translation_source: source
  content_translation_outdated: translate

Finally, you can set up migration dependencies to ensure your migrations run in the right order:

migration_dependencies:
  required:
    - example_creature_base

You can look at the full migration file in the code samples repo.

Running the Migrations

Since we have set dependencies, we can instruct Drupal to run the migration group and it will run the migrations in the right order.

To do so, execute drush mim --group=entity_translation and the output will look like this:

[notice] Processed 9 items (9 created, 0 updated, 0 failed, 0 ignored) - done with 'example_creature_base'
[notice] Processed 9 items (9 created, 0 updated, 0 failed, 0 ignored) - done with 'example_creature_translations'

You can also run drush ms to see current migration status:

 -------------------- ----------------------- -------- ------- ---------- ------------- ---------------------
  Group                Migration ID            Status   Total   Imported   Unprocessed   Last Imported
 -------------------- ----------------------- -------- ------- ---------- ------------- ---------------------
  Entity Translation   example_creature_base   Idle     9       9          0             2020-11-09 15:54:06
  Entity Translation   example_creature_tran   Idle     9       9          0             2020-11-09 15:54:07
 -------------------- ----------------------- -------- ------- ---------- ------------- ---------------------

Next Steps

OpenSense Labs: REST APIs in Drupal

Planet Drupal - Mon, 2020/11/23 - 10:32am
By Gurpreet Kaur, Mon, 11/23/2020 - 15:02

You must be familiar with words like intermediary, middleman and mediator. What do these words mean? Could they possibly denote a job profile? I think they can. An intermediary, a middleman and a mediator all constitute a connection between two parties; they provide a line of communication that lets one party access the other with ease. Think of a store manager, who gives the consumer access to the manufacturer's products: without the store manager, making sales would be pretty difficult.

Bringing the conversation back to the topic at hand, an API is essentially an intermediary, a middleman or a mediator. An Application Programming Interface gives your users and clients access to the information they are seeking from you. The information provider uses an API to hand the information over to the information user through a set of definitions and protocols.

In decoupled Drupal, the API layer provides the connection between the separated front-end and back-end layers. Without it, decoupling would not be possible, since there won’t be a way to transmit content from the backend to the presentation layer. 

There are quite a few APIs that perform impressive functions, but today we will focus only on REST APIs. So, let’s delve right in.

What makes REST API important?

A REST API, or RESTful API, is built on the constraints of the Representational State Transfer (REST) architecture. It is renowned for making development easy by supporting HTTP methods and error handling, along with other RESTful conventions. Since REST builds on HTTP, there is no need to install a separate library or other software to take advantage of REST’s design, which makes development all the easier.

When a REST API is used to make a request, a representation of the state of the resource is transferred to the requestor. This can be done over HTTP in numerous formats, such as JSON, HTML, XML or your plain old text.
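To make the idea of a "representation" concrete, here is a small Python sketch (the node fields are made up for illustration) that expresses the same resource state as JSON and as plain text:

```python
import json

# A hypothetical resource state, e.g. a Drupal node.
node = {"nid": 1, "title": "Hello world", "langcode": "en"}

# The same state, represented as JSON...
as_json = json.dumps(node, sort_keys=True)
print(as_json)  # {"langcode": "en", "nid": 1, "title": "Hello world"}

# ...and as plain text. The state is identical; only the representation differs.
as_text = "\n".join(f"{key}: {value}" for key, value in node.items())
print(as_text)
```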

If you asked me what the best thing about REST is, I would have to say its flexibility. The REST API is designed to be flexible because it is not tied to particular methods and resources. Therefore, it can handle multiple types of calls, return different data formats and even change structurally. Such versatility makes REST competent to provide for all the diverse needs your consumers may have.

REST cannot be defined as a protocol or a standard; rather, a set of architectural principles is a more accurate description. These principles are what make an API a RESTful API. So, let us go through these constraints to understand REST APIs better.

The Segregated Client and Server 

This principle states that the client and the server are to be independent of each other leading to more opportunities for growth and efficiency. Since the separation of concerns would allow a mobile app to make changes without those changes affecting the server and vice-versa, the organisation would grow far more quickly and efficiently.

The Independent Calls

A call made using a REST API is just that: one call; it carries all the data needed to complete a task in and by itself. If a REST API had to depend on data stored on a server for each individual call, it would not be very effective at its job. Having all the necessary data with itself makes a REST API very reliable. This principle is known as statelessness.

The Cacheable Data 

Now you might think that, since each REST call carries so much data with it, it would increase your overheads. However, this is a misconception. REST was built to work with caching, meaning responses can be marked as cacheable. This ability drastically reduces the number of API interactions, leading to reduced server usage and, consequently, faster apps.

The Appropriate Interface 

Decoupling mandates an interface that is not tightly coupled to the API, providing uniformity to application development. This can be achieved by using HTTP along with URI resources, CRUD operations and JSON.
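In practice, such a uniform interface maps CRUD operations onto HTTP methods and URIs along these lines (the paths below are illustrative, not from any particular API):

```
POST   /api/articles     → Create a new article
GET    /api/articles/1   → Read article 1
PATCH  /api/articles/1   → Update article 1
DELETE /api/articles/1   → Delete article 1
```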

The Layered System 

A REST API works with a layered system; what this means is that each server, be it for security or load balancing, is set into a hierarchy. This constrains component behaviour so that one component cannot see beyond its own layer.

The New Code 

Finally, there is Code on Demand, the principle that gives you the option to transmit code or applets through the API layer; this code or applet is then actually used in the application. This principle allows you to build applications that do not rely only on their own code. However, security concerns have made it the least used of them all.

All of these are essentially the guiding principles of a REST API; they also highlight the work REST can do for you and your application, underscoring its importance.

Exploring REST API in Drupal

Now that you know the importance and principles of REST APIs, it is time to move on to exploring them. REST APIs can be explored in Drupal through a number of modules; all you have to know is where to look and what exactly to look for. For that reason, here is a list that will make consuming REST APIs seem like a walk in the park.

Drupal Core Modules

There are certain REST modules that are so popular that they have become a part of Drupal core. These are:

RESTful web services

RESTful Web Services is a module that takes advantage of the Entity API to provide you with information about all entity types, be they nodes, comments, taxonomy terms or users. Built on top of the Serialization module, it lets you customise and extend the RESTful API. It also has the ability to expose additional resources along with adding authentication mechanisms, which can be applied to any of the resources.

Serialization and Serialization API

The primary purpose of the Serialization module is to serialize data to, and deserialize it from, formats such as JSON and XML. You can simply call it a service provider for the same.

The Serialization API is based on the Symfony Serializer Component. Its numerous features have made it quite advantageous to the users. 

  • For one, it can serialize and deserialize data;
  • It helps in encoding to and decoding from new serialization formats; you can both read and write data;
  • It can also normalize and denormalize data and set it into a new normalization format. 


HAL

HAL is an acronym for Hypertext Application Language. This module uses its namesake format to serialise entities. With features similar to the Serialization module, it is often regarded as an extension of it. The HAL hypermedia format can be encoded in JSON as well as XML. Being a part of Drupal core, it is the most sought-after format.

This is a module that lets you test drive as well. Yes, once it is installed and configured, you can test drive your site through the HAL browser by simply providing JSON data.

HTTP Basic Authentication 

You must be familiar with the term authentication; the working of HTTP Basic Auth is similar to that. What it does is take a request, identify the username and password of the user, and authenticate them against Drupal. It does so by implementing the HTTP Basic protocol, which essentially encodes the username and password and adds them to an Authorization header; all of this is done within a request.
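The Authorization header involved is just the Base64 encoding of username:password, which a few lines of Python make easy to see (the credentials here are made up):

```python
import base64

username, password = "user", "password"  # made-up credentials

# HTTP Basic encodes "username:password" in Base64...
token = base64.b64encode(f"{username}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)  # Authorization: Basic dXNlcjpwYXNzd29yZA==

# ...which is trivially reversible, hence Basic auth should always run over HTTPS.
decoded = base64.b64decode(token).decode()
print(decoded)  # user:password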

It is to be noted that this module does not use an interface, it acts as a support for Drupal’s Authentication Manager. 

The Alternatives to Basic Authentication

Basic Auth is an important module for REST APIs; therefore, certain alternatives are also available to be used in its place.

OAuth 1.0

The OAuth 1.0 standards are implemented by OAuth 1.0 module to be used in Drupal. It provides a foundation to the modules that want to use the OAuth. 

In Drupal 8, this module achieves the feat of leveraging the OAuth PECL Extension, leading to the implementation of the authentication provider.

Simple OAuth (OAuth2) & OpenID Connect

Simple OAuth can be described as an implementation of the OAuth 2.0 Authorization Framework RFC. In Drupal, it is a module that makes use of the PHP library OAuth 2.0 Server, which is part of The League of Extraordinary Packages. Let me tell you something about this library so you know how valuable it is: it has actually become a standard for modern PHP. Being thoroughly tested, you can't go wrong with it; still, you should check your options when deciding which project to use.

As for OpenID Connect, it comes along with OAuth 2.0 as an identity layer on top of its protocol. It helps you verify the identity of end users and fetch their basic profile information.


OAuth2 JWT SSO

The name OAuth2 JWT SSO makes it fairly clear what the module does: all three acronyms are at work. It can work with Drupal's very own OAuth 2.0, thanks to its ability to configure Drupal so that both centralised and remote authentication services can be used.

Like its name suggests, it also works with JWT and SSO, which is short for Single Sign On. It can capitalise on any SSO, provided that it uses OAuth2 as its authentication framework, and JWT as its Bearer token. 

Cookie Based Authentication

If you have ever used a website, you would then know what a cookie actually is. Was it just today when you declined that ‘accept cookies’ request? These help a website to recognise users so that they do not have to log in again. 

Now, web applications tend to use cookie-based authentication, which they implement in different ways. However, at the end of the day, they will have some cookie set up that represents an authenticated user. That cookie is transmitted along with every request, and the session is deserialized from a store.
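The mechanics are easy to demonstrate with Python's standard library (the cookie name and session value below are made up): the server sets a cookie once, and the client sends it back on every subsequent request.

```python
from http.cookies import SimpleCookie

# Server side: issue a session cookie after login (name and value are made up).
response_cookie = SimpleCookie()
response_cookie["SESS1234"] = "abc123"
response_cookie["SESS1234"]["httponly"] = True
set_cookie_header = response_cookie["SESS1234"].OutputString()
print(set_cookie_header)  # SESS1234=abc123; HttpOnly

# Client side: parse the Cookie header and send the session ID back each time.
request_cookie = SimpleCookie()
request_cookie.load("SESS1234=abc123")
print(request_cookie["SESS1234"].value)
```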


REST UI

More than 20,000 sites are reported to use this very module. It is known to be fully feature-packed, and its maintainers think the same.

Coming to its abilities, REST UI provides an interface to configure Drupal 8’s REST module. Thanks to its handy configuration, you won’t find a need to play with Drupal’s configuration import page. This not only benefits novice Drupal users, but also saves a substantial amount of configuration time. You can install it using the default approach, Drush or the Drupal Console.


The REST API is pretty versatile in its features, and Drupal has all the necessary modules to consume it in an optimised manner. If you had to choose a thread to hold your front end and back end together, I would say that a REST API would not let you down. However, that is only possible if you know how to make the most of it using Drupal. I hope this blog has enlightened you about just that. To learn about other web services available in Drupal in addition to REST, read about GraphQL and JSON:API.


Annertech: How Annertech is Contributing to DrupalCon Europe 2020

Planet Drupal - Mon, 2020/11/23 - 9:41am

Every year, Annertech is actively involved with DrupalCon, and this year is no different. Here are some of the ways we are contributing this year.


Promet Source: Texas Sees Sharp Rise in Rate of ADA Lawsuits

Planet Drupal - Sun, 2020/11/22 - 8:09pm
As ADA accessibility standards adjust to the digital age, websites and all digital properties need to adhere to the latest accessibility compliance standards. Legal action in Texas pertaining to ADA web accessibility non-compliance has been on a particularly sharp trajectory. In 2019, Texas saw 239 ADA Title III lawsuits, up from 196 in 2018. This represents a 21.9 percent increase in Texas, and many other states have seen similar or even greater increases in their case volume.

Bloomidea's Blog: Test your Drupal emails with Lando, MailHog, and Swiftmailer

Planet Drupal - Sat, 2020/11/21 - 4:25pm

If you haven't heard of MailHog, it is an email testing tool for developers. Lando has native support for MailHog, you just need to add it as a service to your .lando.yml file. You can just copy the example I leave below, which also has some cool extras, like enabling and disabling Xdebug on the fly. There's a second file .lando.php.ini that .lando.yml references with some custom settings, like changing the default xdebug.remote_port to 9001 (I need that on my Mac).

On the Drupal side, you'll obviously need to enable the Swift Mailer module. If you haven't done it already, you should enable your settings.local.php file so you can have your local development overrides.

.lando.yml

name: drupal
recipe: drupal8
config:
  webroot: web
  xdebug: false
  database: mysql:8.0
  config:
    php: .lando.php.ini
services:
  phpmyadmin:
    type: phpmyadmin
    hosts:
      - database
  appserver:
    overrides:
      ports:
        - ""
      environment:
        PHP_SENDMAIL_PATH: '/usr/sbin/sendmail -S mailhog:1025'
  mailhog:
    type: mailhog
    portforward: true
    hogfrom:
      - appserver
proxy:
  mailhog:
    -
tooling:
  xdebug-on:
    service: appserver
    description: Enable xdebug for apache.
    cmd: "docker-php-ext-enable xdebug && /etc/init.d/apache2 reload"
    user: root
  xdebug-off:
    service: appserver
    description: Disable xdebug for apache.
    cmd: "rm /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && /etc/init.d/apache2 reload"
    user: root

.lando.php.ini

; Xdebug
xdebug.max_nesting_level = 256
xdebug.show_exception_trace = 0
xdebug.collect_params = 0
xdebug.remote_enable = 1
xdebug.remote_autostart = 1
xdebug.remote_host = ${LANDO_HOST_IP}
xdebug.remote_port = 9001

settings.local.php

// Swiftmailer MailHog settings override
$config['swiftmailer.transport']['transport'] = 'smtp';
$config['swiftmailer.transport']['smtp_host'] = 'mailhog';
$config['swiftmailer.transport']['smtp_port'] = '1025';
$config['swiftmailer.transport']['smtp_encryption'] = '0';


By José Fernandes


Amazee Labs: A Look at the Underlying Automation of our Managed Web Maintenance Services - Pt. 2

Planet Drupal - Fri, 2020/11/20 - 11:52am
Part 2 - Web Maintenance & Automation

OpenSense Labs: Why Is GraphQL an Important Player in Decoupled Drupal?

Planet Drupal - Fri, 2020/11/20 - 6:49am
By Gurpreet Kaur, Fri, 11/20/2020 - 11:19

Drupal is a renowned name when it comes to website development. The kind of features and control it gives you over your content is quite impressive. The traditional Drupal architecture has repeatedly proven to be valuable, and now it is time for the decoupled Drupal architecture to do the same, which it is on the path to doing. A major reason for the increasing popularity and adoption of decoupled Drupal is the freedom to build the front end with the technologies you like.

Since the Decoupled Drupal architecture separates the presentation layer from the backend content, an API becomes the thread that holds the two together to work in sync. So, it is important to choose the right one for your project. While REST API and JSON: API are quite sought after, GraphQL has also emerged as a front runner. So, let us find out what exactly GraphQL is, what it can do and how it plays in the Decoupled Drupal architecture’s picture.

Decoding GraphQL 

GraphQL, a query language, came into being about eight years ago; however, its popularity took off when it was made open-source in 2015. Its creator, Facebook, built a unique API because it needed a program to fetch data from the entirety of its content repository. It is a system that is easy to learn and equally easy to implement, regardless of the massive proportions of data you may have or the massive number of sources you may want to go through.

When I said that GraphQL is a query language, I meant just that. It is a language that will answer all your queries for data using its schema, which can easily be deployed using GraphiQL, an Integrated Development Environment. GraphQL, being a language specific for APIs, is equipped to manipulate data as well with the help of a versatile syntax. The GraphQL syntax was created in a way that it is able to outline the requirements and interactions of data to your particular needs. The shining glory of GraphQL is that you only get what you have asked for, nothing more, nothing less. This means that the application will only work towards retrieving data that is of the essence in the request. The data might have to be loaded from varying sources, but it will still be accurate and precise, succinct to the T and exactly what you sought.
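As an illustration of that exactness, here is a query sketch (the field names are hypothetical, not from any particular schema) that asks for only two pieces of an article and gets back exactly those two, nothing more:

```graphql
# Request only the title and the author's name; no other fields are returned.
{
  article(id: 1) {
    title
    author {
      name
    }
  }
}
```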

With decoupling becoming a need now more than ever and content and presentation layers segregating from each other, GraphQL becomes the answer for all data queries, essentially becoming the answer for data retrieval for your API and your web application. A query language and a runtime to fulfil every query within your existing data, GraphQL is powered to paint a complete and totally understandable description of your data using the API. Its robust developer tools have proven to make APIs faster, more flexible and extremely friendly to our developer friends. Therefore, achieving decoupling becomes extremely easy as GraphQL provides typed, implementation agnostic contracts amongst systems. 


GraphQL vs JSON:API vs REST API

The power and assistance of APIs have become all the more conspicuous in the decoupled world. With three prominent names taking the lead here, it becomes somewhat tricky to reach the right API conclusion. GraphQL, JSON:API and REST API all have their own virtues, making them great at whatever they intend to do. However, it is almost impossible to talk about one and ignore the other two. My writing would not have been complete without a comparison of the three.

If you look at the core functionality of these three APIs, you will realise that GraphQL and JSON: API are much more similar to each other than REST API, which is a whole other ball game compared to them. Let us look at them.

How much data is retrieved?
  • GraphQL: does not over-fetch data.
  • JSON:API: does not over-fetch data.
  • REST API: inundates the user with unnecessary amounts of data.

How is the API explored?
  • GraphQL: has the best API exploration, thanks to GraphiQL.
  • JSON:API: uses a browser to explore the API.
  • REST API: performs relatively poorly, and navigable links are rarely available.

How is the schema documentation?
  • GraphQL: perfect auto-generated documentation and a reliable schema.
  • JSON:API: depends on the OpenAPI standard; the JSON:API specification only defines a generic schema.
  • REST API: depends on the OpenAPI standard.

How do the write operations work?
  • GraphQL: write operations are tricky.
  • JSON:API: write operations come with complete solutions.
  • REST API: writes can become tedious, with multiple implementations.

How streamlined is installation and configuration?
  • GraphQL: provides numerous non-implementation-specific developer tools, but is low on scalability and security.
  • JSON:API: provides numerous non-implementation-specific developer tools, and is high on scalability and security.
  • REST API: provides numerous tools, but they require specific implementations; good scalability and high security.

GraphQL

With GraphQL and its distinct fields of query, the developer is asked to specify each and every desired resource in these fields. You might be wondering why? The answer lies in its exactness. It is because of these explicitly mentioned desires that GraphQL never over-fetches data.

Coming to API exploration, GraphQL takes the cake for being the simplest and most conclusive. The fact that a GraphQL query comes with suggestions that can be auto-completed justifies my earlier claim. Moreover, the results are shown alongside the query resulting in smooth feedback. Its in-house IDE, GraphiQL, also helps in generating iterations of the queries, aiding the developers even more. 

If you told me that the GraphQL specification is better equipped for handling reads than writes, I would not object. Its mutations require you to create new custom code every time, and you can probably tell how much of a hassle that can be. Regardless, GraphQL can easily support bulk write operations once they have been implemented.

In terms of scalability, GraphQL requires additional tools to reach its full potential; there are certainly numerous developer tools available, and none of them are implementation-specific.


JSON:API

The problem of over-fetching is not witnessed with JSON:API either. Its sparse fieldsets produce an output similar to GraphQL’s. However, unlike GraphQL’s uncacheable requests, JSON:API can omit the sparse fieldsets when they become too long and can therefore cache even the longest requests.
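As a rough sketch of how sparse fieldsets keep responses lean, the Python helper below builds a JSON:API collection URL that asks for only two attributes. The base URL and entity type are hypothetical; only the `fields[TYPE]` query-parameter convention comes from the JSON:API specification.

```python
from urllib.parse import urlencode

def jsonapi_url(base, resource_type, fields, include=None):
    """Build a JSON:API URL with sparse fieldsets so the server returns
    only the attributes the client asked for."""
    params = {f"fields[{resource_type}]": ",".join(fields)}
    if include:
        params["include"] = ",".join(include)
    # Drupal-style path: a "node--article" type maps to /node/article.
    path = resource_type.replace("--", "/")
    return f"{base}/{path}?{urlencode(params, safe='[],')}"

url = jsonapi_url("https://example.com/jsonapi", "node--article",
                  ["title", "created"], include=["uid"])
```

When the field list grows too long, the parameter can simply be dropped and the full (cacheable) document fetched instead.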

With JSON:API, exploration is simple as well; as simple as browsing within a web browser, which is basically how the API behaves here. From scouring through different resources and fields to debugging, you can do all of it from your browser. Can data retrieval be simpler?

In terms of writes, JSON:API is probably the best of the bunch. It offers a complete solution for handling writes through POST and PATCH requests. Even though bulk support is not available at the moment, it is in the works and will soon be usable.
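A sketch of what such a write looks like: a JSON:API update wraps the changes in a single top-level `data` object carrying the resource's type and id, and is sent as a PATCH request. The UUID and attribute values below are made up for illustration.

```python
import json

def jsonapi_patch_body(resource_type, uuid, attributes):
    """Build the body of a JSON:API update document; it is sent with a
    PATCH request to the resource's individual URL."""
    return json.dumps({
        "data": {
            "type": resource_type,
            "id": uuid,
            "attributes": attributes,
        }
    })

body = jsonapi_patch_body(
    "node--article",
    "11111111-1111-1111-1111-111111111111",  # hypothetical UUID
    {"title": "New title"},
)
```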

JSON:API is again similar to GraphQL in that it also provides various developer tools that are not implementation-specific. What sets it apart from the former is that its infrastructure resembles that of a regular website, relying on Varnish or a CDN.


REST API

REST API probably has the worst over-fetching of the three. Not only does it mandate multiple requests for one piece of content, it also gives you responses that are often beyond verbose. The data you end up with is far more than you asked for or needed.

Again, REST API is completely different from the other two in terms of data exploration, and sadly not in a good way. REST API is highly dependent on the OpenAPI standard, and if you are not adhering to it, things look bleak: you cannot rely on auto-generated documentation or a validatable, programmable schema. Navigating through large volumes of data is not too impressive either, as navigable links are rarely available.

Writing data with REST API is quite easy, almost as easy as reading it, using POST and PATCH requests; however, every implementation is unique, and bulk support is not on the table.

REST API’s infrastructural needs also resemble those of an ordinary website, encompassing Varnish or a CDN. However, its additional tools, although many, mandate customisation before implementation.

The GraphQL and Decoupled Drupal Dynamic 

GraphQL is a language still under development as I write and you read this, but that does not mean it is unstable. On the contrary, GraphQL is already being put to use by several Drupal websites. The exactness of its responses, along with the always-available introspection layer, makes GraphQL truly worth it.

Let us now understand how it is the perfect answer to Decoupled Drupal by asking all the right questions or shall I say queries?

How does Drupal fully capitalise on GraphQL?

Drupal can be rigid, as all of us know. It can also feel like too much, given everything it has to offer. What GraphQL does is give you the ability to create and expose a custom schema that becomes the only roadway to all the data in the system: information, operations and interactions alike. And then Drupal no longer seems so rigid.

You get a GraphQL module in Drupal which is designed around webonyx/graphql-php. What this means is that the module is basically as jam-packed with features as the GraphQL specification itself.

  • The module can be used as the basis for creating your very own schema by writing custom code.
  • It can also be used to extend the already existing schema through its plugin architecture, wherein each plugin acts as a sub-module.
  • To aid development further, GraphiQL is included at /graphql/explorer, which acts as a user interface for you.
  • Lastly, there are built-in debugging tools that can issue queries and analyse their responses in real time.

GraphQL is a powerful tool, and Drupal has ensured that its whole community can easily tap into that power.

The GraphQL Twig module is the next advancement in Drupal. It is generally thought that GraphQL queries can only be sent over HTTP, but that isn’t true: they can be, but there are other ways as well, and this module demonstrates that. You can decouple your Twig templates from the internal structures of Drupal, so that maintenance and reuse become easier, without any involvement of HTTP.

Should we use GraphQL or JSON:API or REST in Drupal?

Before getting into why GraphQL, we have to understand why not REST and what its limitations are. First of all, the REST UI module is practically essential for setting up the REST module in Drupal, and it can still be pretty arduous to configure. More importantly, the primary problem with REST is that it over-fetches information, bombarding you with data you do not need and certainly did not ask for. You might have needed just the title of an article, but the author’s user id is included in the response as well. This leads to a cycle of follow-up queries, and you end up with the article’s title, its link, its author’s name and information, and the entire content of the said article. Over-fetching is putting it lightly.
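That follow-up cycle can be mimicked with a toy in-memory "REST server" in Python. The paths and fields below are invented for illustration; the point is that the first response hands back every field, yet the client still needs a second round trip just to get the author's name.

```python
# Canned REST-style responses (toy data, not a real API): each
# resource returns every field it has, whether the client wants it or not.
RESOURCES = {
    "/articles/1": {"id": 1, "title": "Hello", "body": "<long text>",
                    "author": "/users/9"},
    "/users/9": {"id": 9, "name": "Alice", "mail": "alice@example.com"},
}

def get(path):
    """Simulate a full REST response for the given path."""
    return dict(RESOURCES[path])

article = get("/articles/1")     # we only wanted the title...
author = get(article["author"])  # ...but the name costs a second request
result = {"title": article["title"], "author": author["name"]}
```

With GraphQL or JSON:API (via `include`), the same `result` could be assembled from a single request.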

Because GraphQL uses APIs in a more flexible way, it beats REST endpoints. Rather than exposing single resources with fixed data structures and links between them, it gives you the opportunity to request any selection of data you may need. You can easily query multiple resources on the server side simultaneously, combining the different pieces of data into one single query. Hence, your work as a front-end developer becomes as easy as pie. You could still go for REST if you wanted; it does have its own set of merits.

Now, choosing between JSON:API and GraphQL is a more difficult call. The two perform at a level parallel to each other. For instance, installing the JSON:API module is a piece of cake, with absolutely no configuration required. GraphQL’s installation is easy as well, but it does need some configuration. Do you see why I said the choice was difficult?

Where decoupling is concerned, JSON:API and GraphQL are much better than REST. Clients do not require server-side configuration to perform content queries. While JSON:API by default alters every client-generated query to respect access restrictions, GraphQL requires the consumer to hold permissions before it can forego any of them. There is no right or wrong method here: both have the right filtering tools for decoupling, and both keep your content secure.

When is the GraphQL module the most suitable?

Fetching only the data that is asked for should be the tagline of GraphQL, and that is why it becomes quite handy in scenarios built around data retrieval:

  • Decoupled Drupal applications, with Drupal as the content repository and a React, Angular or Ember powered front-end.
  • Mobile applications, wherein data storage is the need of the hour.
  • Internet of Things data storage.
  • If you want to retrieve the JSON data from Drupal.
  • And also if you plan to use the Twig Templates in Drupal themes.

The GraphQL module would be perfect in all of these instances.

Why do GraphQL and Drupal fit so well?

GraphQL is considered an excellent fit for decoupled Drupal websites, especially those that store data in entities whose fields can hold relationships to other entities. GraphQL helps you curate queries for just the fields you need from an Article. This kind of flexibility makes it easy to restructure the object you get back, and to change the display along with it. You can write queries and add or remove fields from the results, all without writing code on the backend. The GraphQL module’s ability to expose every entity, including pages, users and customer data, makes all of this simple. So, suffice it to say that the decoupling experience would not be the same without GraphQL.

Has GraphQL impacted the conventional server-client relationship?

Traditionally, the server was the dominant party in the server-client relationship; now the client holds more power, and GraphQL has made certain of this. With it, the client need not follow everything the server imposes; it can simply state its needs on a per-request basis. Once the server has shown the data possibilities it is equipped to fulfil, it becomes extremely easy for the client to define its needs against that catalogue of possibilities. And the shape of the values a GraphQL API accepts is every bit the same as the shape of the values it returns.

On top of this, GraphQL taps into deeply nested relational data structures, which are well suited to graph models. GraphQL’s schema and query validation abilities also let it reject overloaded queries, helping prevent distributed denial-of-service attacks. The benefits of GraphQL in Drupal are indeed far and many.

What Will the Future Look Like?

GraphQL is not part of Drupal core, being an advanced tool that is a little more complex to use, and the future is not showing any signs of that changing either. However, there are other things to look forward to. GraphQL v4 for Drupal is one of the most awaited releases of the GraphQL module, and it would bring numerous improvements to a module that seems to be perpetually evolving. The GraphQL schema will be in the total control of Drupal developers; since schema customisation was the holy grail of this module, things are looking up and the future is brighter. GraphQL and Drupal have a long way to go together.


Greg Boggs: Drupal Merge Requests using Gitlab!

Planet Drupal - Fri, 2020/11/20 - 1:00am

Drupal.org has used patches to manage community contribution for many years. But patches are difficult for new users to learn and require the use of the command line. Recently, Drupal.org code has migrated to Gitlab, and we can now use Gitlab and issues to create merge requests to share code and fixes with modules and Drupal core. Here’s an overview of creating a merge request for folks who want all the details.

Drupal Forks are Special

Drupal Gitlab forks are special because they are accessible to everyone with a Drupal.org account. There is one fork per issue, and anyone can edit the fork and open a merge request to the main project.

Setup your Drupal Account

Before we begin, you must register for an account on Drupal.org to get access to Gitlab forks. Next, add an SSH public key to your user profile so you don’t need a password to work. Lastly, agree to the Git terms of service so you can access forks.

Create an issue on Easy Breadcrumb

Before you can create forks on Drupal.org’s Gitlab, you need an issue! So, start by creating an issue on Easy Breadcrumb.

Once you have the issue, press “Create issue fork”.

Clone Easy Breadcrumb

Next, clone a copy of the module you want to modify. You can get the exact instructions for this by clicking “Version control” near the top of the module’s project page.

git clone --branch 8.x-1.x
cd easy_breadcrumb

Make Your Changes

Now, edit the code on your own computer to fix the issue. This is the hardest part!

Send Your Work to the Issue Fork on Gitlab

The exact git commands vary slightly from issue to issue. So, check the “view commands” link on the issue to see the commands for your issue, but here are the ones I ran. I got the commit message from the commit credit section towards the bottom of the issue.

git remote add easy_breadcrumb-3174165
git fetch easy_breadcrumb-3174165
git checkout -b '8.x-1.x' --track easy_breadcrumb-3174165/'8.x-1.x'
git add .
git commit -m 'Issue #3174165 by kell.mcnaughton, pattsai: How to support limiting depth' --author="git <>"
git push --set-upstream easy_breadcrumb-3174165 HEAD

The commands do the following:

  • Add the new fork as a remote.
  • Fetch the new fork.
  • Checkout a branch on that fork.
  • Add your work to that branch.
  • Commit your work to that branch.
  • Push your work to Gitlab.
Open a Merge Request

Once your work is saved to the issue fork on Gitlab, go back to the issue on Drupal.org and click “Compare” near the top of the issue. Then, click “Open merge request”!

Now that your request is open, the maintainer of the module just needs to press merge on Gitlab to make the code part of the project!


Nonprofit Drupal posts: November Drupal for Nonprofits Chat

Planet Drupal - Thu, 2020/11/19 - 3:41pm

Our normally scheduled call to chat about all things Drupal and nonprofits will happen TODAY, Thursday, November 19, at 1pm ET / 10am PT. (Convert to your local time zone.)

No set agenda this month, so we'll have plenty of time to discuss whatever Drupal-related thoughts are on your mind. 

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

Feel free to share your thoughts and discussion points ahead of time in our collaborative Google doc:

This free call is sponsored by and open to everyone.

View notes of previous months' calls.


Mediacurrent: How to Secure Your Website: An Intro to Drupal Security

Planet Drupal - Thu, 2020/11/19 - 3:06pm

Is Drupal’s open source platform secure?

When deciding on the best CMS to meet your organization’s digital vision, security is often one of the top concerns. 

Here’s the reality. ALL software (closed source, open source, or custom-developed) has the potential for security vulnerabilities. Web security is a fast and ever-changing world. What passes today as secure code may not stay the same tomorrow when new vulnerabilities surface.

There's peace of mind in knowing not only is Drupal a proven, secure CMS but that it's also in an active state of safeguarding against attacks. 

With proper planning, maintenance, and updating, open source software, like Drupal, can meet and even exceed the security standards of closed source. 

- Mediacurrent’s Mark Shropshire, Senior Director of Development, quoted in an excerpt from Setting the Record Straight on Drupal Myths: Acquia eBook 

Security and The Drupal Community

Open source software, like Drupal, has the bonus of having thousands of experts work on a particular problem. With entire teams and methodology devoted to ensuring its steadfast reputation as a secure CMS, it's comforting to know modules and code procured from the official Drupal site are as secure as possible.

Using Drupal means you never have to face these risks alone, let alone attempt to correct the problem by yourself. There's power in numbers when it comes to both discovering and fixing potential software flaws, and the Drupal community has those numbers.

Dedicated Security Team

The Drupal project has an approximately 30-person security team with a history of professionally handling security advisories. 

Community Code Review

One of the largest developer communities in the world, Drupal clocked 100,000 contributors and 1.3 million users at the time of Drupal 9’s release. Having many eyes on the source code ensures more issues are discovered and resolved.

Rapid Response Time

Defined processes around reporting and resolving security issues accelerate the response time to fix vulnerabilities and release patches. 

Core Security

Core API tools and techniques address common security risks. Community projects such as the Guardr distribution help educate the community on best practices around Drupal security.

The Guardr distribution was created to enhance a Drupal application's security and availability to meet enterprise security requirements.

Proven High Standards 

Drupal-based organizations around the world — including enterprise-level brands, major universities, government, and large non-profits — put Drupal’s high security standards to the test every day.

Drupal Security Throughout the Website Process

The Drupal community has built-in security measures to combat threats — reassuring for sure. To proactively protect your site, the concept of security needs to be at top of mind when campaigns are being launched, systems/applications are being integrated, or when software is deployed or updated. 


A security-first approach means going beyond compliance to better assess risk. There are two paths to achieve this approach:

1) Culture: Adopting a security mindset culture for your organization. 

2) Automation: Taking on a continuous development plan that’s rooted in process automation.

In other words, start planning for security early and often throughout the website development process. 

Don’t wait until the project is about to launch to think about security! Explore our Guide to Open Source Security eBook for tips and processes to consider when putting together a security-first maintenance plan for your website and marketing tech stack.

Developer Best Practices 

Here are three ways to safeguard your Drupal site: 

1. Choose the Right Modules

If you can dream up a feature for your site, chances are it can be found in the tens of thousands of community-contributed modules available for Drupal. With so many different options to pick from, how do you choose the most secure modules possible? Some steps to take are checking for how many sites are using the module, reviewing the issue queues, and avoiding deprecated or unsupported modules. 

Find more criteria for module decision-making in our Guide to Drupal Module Evaluation.

2. Use Drupal APIs

Look to the Drupal API documentation to secure your contrib or custom code on a project. Drupal’s APIs have been nurtured by the community and have built-in protections for database security. If you do write new code, whether it’s a small amount or a completely new module, the Writing Secure Code for Drupal guide is a must-see reference.
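The core idea behind those built-in database protections is parameterised queries. The snippet below illustrates the principle in generic Python with sqlite3, not Drupal code; Drupal's database API achieves the same effect with named placeholders such as `:title` in its query methods.

```python
import sqlite3

# A throwaway in-memory database with one known row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Untrusted input is bound as a parameter, never concatenated into the
# SQL string, so it can only ever match as a literal value.
evil = "alice' OR '1'='1"
rows = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
assert rows == []  # the injection attempt matches nothing
```

Had the input been concatenated directly into the SQL string, the `OR '1'='1` clause would have returned every row.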

3. Monitor Drupal Security Advisories 

The Drupal security team posts weekly security advisories on Drupal.org.

To keep up with security releases, you can sign up for email or RSS notifications, follow @drupalsecurity on Twitter, or join the #security-questions channel in the Drupal Slack.

Sleep Better With a Secure Drupal Site 

For more best practices, check out the Mediacurrent presentation Sleep Better with a Secure Drupal Site: 

Are you ready to build a strong foundation for Drupal security but unsure where to start? There's a lot to consider in your security plan but it doesn't have to keep you up at night. Contact the Mediacurrent Security team for support. 


rachel_norfolk: Turning the ephemeral nature of free Slack into a feature

Planet Drupal - Thu, 2020/11/19 - 11:31am
Turning the ephemeral nature of free Slack into a feature

While I am admittedly no fan of Slack, for many reasons, not least the company it seems to keep, I do accept that not everyone shares my views and that Slack is at the heart of many open source communities.


Vardot: Drupal Powered Elections in Jordan

Planet Drupal - Thu, 2020/11/19 - 8:58am
By Firas Ghunaim, Marketing Manager. Thursday, November 19, 2020 - 09:58.

Nextide Blog: Medical Applications with Drupal

Planet Drupal - Wed, 2020/11/18 - 8:59pm

In our blog post about Innovating Healthcare with Drupal, we talked about using Drupal to deliver an application that improves the healthcare experience for palliative care patients.  Our application was a resounding success.  The global COVID-19 pandemic hits and the need to keep people out of the Emergency Rooms to stop the spread of the Coronavirus suddenly becomes urgent.  To move the Drupal application out of tightly controlled pilots to a more widely distributed application requires adherence to HIPAA (USA) and PIPEDA (Canada) guidelines to safeguard patient information.  Unfortunately, the tried and tested Drupal DevOps and hosting environments we’ve become accu