OSTraining: How to Build User Profiles With Fields in Drupal 8

Planet Drupal - Thu, 2019/06/13 - 5:49pm

By default, a Drupal 8 user account collects only very basic information about the user. 

And, most of that information is not visible to visitors or other users on the site.

Fortunately, Drupal makes it easy to modify and expand this profile so that people can add useful information about themselves such as their real name (versus a username), address, employer, URLs, biography, and more.

wishdesk.com: Image hover effect in Drupal 8 with Imagepin button module

Planet Drupal - Thu, 2019/06/13 - 2:08pm
Drupal 8 treats easy content creation as a priority, and there are also many useful modules for creating image hover effects. Let’s take a look at a simple but nice one — the Imagepin button module.

Lullabot: An Overview for Migrating Drupal Sites to 8

Planet Drupal - Thu, 2019/06/13 - 1:23pm

Over the past few months working on migrations to Drupal 8, researching best practices, and contributing to core and contributed modules, I discovered that there are several tools available in core and contributed modules, plus a myriad of how-to articles. To save you the trouble of pulling it all together yourself, I offer a comprehensive overview of the Migrate module plus a few other contributed modules that complement it in order to migrate a Drupal site to 8.

Let's begin with the most basic element: migration files.

Srijan Technologies: Demystifying the Decoupled Architecture

Planet Drupal - Thu, 2019/06/13 - 11:18am

In a world with no limit to the devices used to access information, you must ensure your data is always available on the go! The pace of innovation in content management is accelerating along with the number of channels that web content must support.

Web Omelette: Dynamic migrations using "templates" in Drupal 8

Planet Drupal - Thu, 2019/06/13 - 9:24am

This article is a companion to the presentation I held at the Drupal Dev Days 2019 conference in Cluj Napoca.

In this article we are going to explore some of the powers of the Drupal 8 migration system, namely the migration “templates” that allow us to build dynamic migrations. And by templates I don’t mean Twig templates but plugin definitions that get enhanced by a deriver to make individual migrations for each of the things that we need in the application. For example, as we will explore, one migration for each language.

The term “template” I inherit from the early days of Drupal 8, when migrations were config entities and core had migration (config) templates in place for Drupal-to-Drupal migrations. But I like to use this term to also represent the deriver-based migrations because it kinda makes sense. It’s a personal choice though, so feel free to ignore it if you don’t agree.

Before going into the details of how dynamic migrations work, let’s cover a few of the more basic things about migrations in Drupal 8.

What is a migration?

The very first thing we should talk about is what a migration actually is. The simple answer to this question is: a plugin. Each migration is a YAML-based plugin that brings together all the other plugins the migration system needs to run an actual logical migration. And if you don’t know what plugins are, they are swappable bits of functionality that are meant to perform a similar task, depending on their type. They are all over core, and by now there are plenty of resources to read more about the plugin system, so I won’t go into it here.

Migration plugins, unlike most others such as blocks and field types, are defined in YAML files inside the module’s migrations folder. But just like all other plugin types, they map to a plugin class, in this case Drupal\migrate\Plugin\Migration.

The more important thing to know about migrations, however, is the logical structure they follow. By this I mean that each migration is made up of a source, multiple process steps and a destination. Makes sense, right? You need to get some data (the source reads and interprets its format), prepare it for its new destination (the process plugins alter or transform the data) and finally save it in the destination (which has a specific format and behaviour). And to make all this happen, we have plugins again:

  • Source plugins
  • Process plugins
  • Destination plugins

Source plugins are responsible for reading and iterating over the raw data being imported. This can come in many formats: SQL tables, CSV files, JSON files, URL endpoints, etc. And for each of these we have a Drupal\migrate\Plugin\MigrateSourceInterface plugin. For typical migrations, you’ll probably pick an existing source plugin, point it to your data and you are good to go. You can, of course, create your own if needed.

Destination plugins (Drupal\migrate\Plugin\MigrateDestinationInterface) are closely tied to the site being migrated into. And since we are in Drupal 8, these relate to what we can migrate to: entities, config, things like this. You will very rarely have to implement your own, and typically you will use an entity based destination.

In between these two, we have the process plugins (Drupal\migrate\Plugin\MigrateProcessInterface), which are admittedly the most fun. There are many of them already available in core and contrib, and their role is to take data values and prepare them for the destination. And the cool thing is that they are chainable so you can really get creative with your data. We will see in a bit how these are used in practice.
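To give a feel for their shape, here is a minimal sketch of a custom process plugin (the capitalize_label ID and class are hypothetical, not something from this article’s module):

namespace Drupal\advanced_migrations\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Trims the incoming text and capitalizes its first letter.
 *
 * @MigrateProcessPlugin(
 *   id = "capitalize_label"
 * )
 */
class CapitalizeLabel extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Receives one source value and returns the transformed value; when
    // plugins are chained, each one receives the previous plugin's result.
    return ucfirst(trim((string) $value));
  }

}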

The migration plugin is therefore a basic definition of how these other 3 kinds of plugins should be used. You get some meta, source, process, destination and dependency information and you are good to go. But how?

That’s where the last main piece comes into play: the Drupal\migrate\MigrateExecutable. This guy is responsible for taking a migration plugin and “running” it, meaning that it can make it import the data or roll it back, plus some other adjacent things that have to do with this process.
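For illustration, running a migration programmatically could look roughly like this (a minimal sketch; my_migration is a placeholder plugin ID and error handling is omitted):

use Drupal\migrate\MigrateExecutable;
use Drupal\migrate\MigrateMessage;

// Instantiate the migration plugin and run the import.
$migration = \Drupal::service('plugin.manager.migration')->createInstance('my_migration');
$executable = new MigrateExecutable($migration, new MigrateMessage());
$executable->import(); // Or $executable->rollback() to undo it.

In practice, though, you will usually run migrations through Drush, as we will see below.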

Migrate ecosystem

Apart from the Drupal core setup, there are a few notable contrib modules that any site doing migrations will (or should) use.

One of these is Migrate Plus. This module provides some additional helpful process plugins, the migration group configuration entity type for grouping migrations and a URL-based source plugin which comes with a couple of its own plugin types: Drupal\migrate_plus\DataFetcherPluginInterface (retrieve the data from a given protocol like a URL or file) and Drupal\migrate_plus\DataParserPluginInterface (interpret the retrieved data in various formats like JSON, XML, SOAP, etc). Really powerful stuff over here.

Another one is Migrate Tools. This one essentially provides the Drush commands for running the migrations. To do so, it provides its own migration executable that extends the core one to add all the necessary goodies. So in this respect, it’s a critical module if you wanna actually run migrations. It also makes an attempt at providing a UI but I guess more of that will come in the future.

The last one I will mention is Migrate Source CSV. This one provides a source plugin for CSV files. CSV is quite a popular data source format for migrations so you might end up using this quite a lot.

Going forward we will use all 3 of these modules.

Basic migration

After this admittedly long intro, let’s see what one of these migrations looks like. I will create one in my advanced_migrations module, which you can also check out on GitHub. But first, let’s see the source data we are working with. To keep things simple, I have this CSV file containing product categories:

id,label_en,label_ro
B,Beverages,Bauturi
BA,Alcohols,Alcoolice
BAB,Beers,Beri
BAW,Wines,Vinuri
BJ,Juices,Sucuri
BJF,Fruit juices,Sucuri de fructe
F,Fresh food,Alimente proaspete

And we want to import these as taxonomy terms in the categories vocabulary. For now we will stick with the English label only. Afterwards, we will see how to get them translated with the corresponding Romanian labels as well.

As mentioned before, the YAML file goes in the migrations folder and can be named advanced_migrations.migration.categories.yml. The naming is pretty straightforward to understand so let’s see the file contents:

id: categories
label: Categories
migration_group: advanced_migrations
source:
  plugin: csv
  path: 'modules/custom/advanced_migrations/data/categories.csv'
  header_row_count: 1
  keys:
    - id
  column_names:
    0:
      id: 'Unique Id'
    1:
      label_en: 'Label EN'
    2:
      label_ro: 'Label RO'
destination:
  plugin: entity:taxonomy_term
process:
  vid:
    plugin: default_value
    default_value: categories
  name: label_en

It’s this simple. We start with some meta information such as the ID and label, as well as the migration group it should belong to. Then we have the definitions for the 3 plugin types we spoke about earlier:

Source

Under the source key we specify the ID of the source plugin to use and any source-specific configuration. In this case we point it to our CSV file and kind of “explain” to it how to read the file. Do check out the Drupal\migrate_source_csv\Plugin\migrate\source\CSV plugin if you don’t understand the definition.

Destination

Under the destination key we simply tell the migration what to save the data as. Easy peasy.

Process

Under the process key we do the mapping between our data source and the destination specific “fields” (in this case actual Drupal entity fields). And in this mapping we employ process plugins to get the data across and maybe alter it.

In our example we migrate one field (the category name), and for this we use the Drupal\migrate\Plugin\migrate\process\Get process plugin, which is assumed unless another one is actually specified. All it does is copy the raw data as it is, without making any change. It’s the most basic and simple process plugin. And since we are creating taxonomy terms, we need to specify a vocabulary, which we don’t necessarily have to take from the source. In this case we don’t, because we want to import all the terms into the categories vocabulary. So we can use the Drupal\migrate\Plugin\migrate\process\DefaultValue plugin to specify what value should be saved in that field for each term we create.

And that’s it. Clearing the cache, we can now see our migration using Drush:

drush migrate:status

This will list our one migration and we can run it as well:

drush migrate:import categories

Bingo bango we have categories. Roll them back if you want with:

drush migrate:rollback categories

Dynamic migration

Now that we have the categories imported in English, let’s see how we can import their translations as well. And for this we will use a dynamic migration using a “template” and a plugin deriver. But first, what are plugin derivatives?

Plugin derivatives

The Drupal plugin system is an incredibly powerful way of structuring and leveraging functionality. You have a task in the application that needs to be done and can be done in multiple ways? Bam! Have a plugin type and define one or more plugins to handle that task in the way they see fit within the boundaries of that subsystem.

And although this is powerful, plugin derivatives are what really makes this an awesome thing. Derivatives are essentially instances of the same plugin but with some differences. And the best thing about them is that they are not defined entirely statically but they are “born” dynamically. Meaning that a plugin can be defined to do something and a deriver will make as many derivatives of that plugin as needed. Let’s see some examples from core to better understand the concept.

Menu links:

Menu links are plugins that are defined in YAML files and which map to the Drupal\Core\Menu\MenuLinkDefault class for their behaviour. However, we also have the Menu Link Content module which allows us to define menu links in the UI. So how does that work? Using derivatives.

The menu links created in the UI are actual content entities. And the Drupal\menu_link_content\Plugin\Deriver\MenuLinkContentDeriver creates as many derivatives of the menu link plugin as there are menu link content entities in the system. Each of these derivatives behaves almost the same as the ones defined in code but contains some differences specific to what has been defined in the UI by the user. For example, the URL (route) of the menu link is not taken from a YAML file definition but from the user-entered value.

Menu blocks:

Keeping with the menu system, another common example of derivatives is menu blocks. Drupal defines a Drupal\system\Plugin\Block\SystemMenuBlock block plugin that renders a menu. But on its own, it doesn’t do much. That’s where the Drupal\system\Plugin\Derivative\SystemMenuBlock deriver comes into play and creates a plugin derivative for each menu on the site. In doing so, it augments the plugin definitions with the info about the menu to render. And like this we have a block we can place for each menu on the site.

Migration deriver

Now that we know what plugin derivatives are and how they work, let’s see how we can apply this to our migration to import the category translations. But why would we actually use a deriver for this? We could simply copy the migration into another one and just use the Romanian label as the term name, no? Well, yes… but no.

Our data is now in 2 languages. It could be 23 languages. Or it could be 16. Using a deriver we can make a migration derivative for each available language dynamically and simply change the data field to use for each. Let’s see how we can make this happen.

The first thing we need to do is create another migration that will act as the “template”. In other words, the static parts of the migration which will be the same for each derivative. And as such, it will be like the SystemMenuBlock one in that it won’t be useful on its own.

Let’s call it advanced_migrations.migration.category_translations.yml:

id: category_translations
label: Category translations
migration_group: advanced_migrations
deriver: Drupal\advanced_migrations\CategoriesLanguageDeriver
source:
  plugin: csv
  path: 'modules/custom/advanced_migrations/data/categories.csv'
  header_row_count: 1
  keys:
    - id
  column_names:
    0:
      id: 'Unique Id'
    1:
      label_en: 'Label EN'
    2:
      label_ro: 'Label RO'
destination:
  plugin: entity:taxonomy_term
  translations: true
process:
  vid:
    plugin: default_value
    default_value: categories
  tid:
    plugin: migration_lookup
    source: id
    migration: categories
  content_translation_source:
    plugin: default_value
    default_value: 'en'
migration_dependencies:
  required:
    - categories

Much of it is like the previous migration. There are some important changes though:

  • We use the deriver key to define the deriver class. This will be the class that creates the individual derivative definitions.
  • We configure the destination plugin to accept entity translations. This is needed to ensure we are saving translations and not source entities. Check out Drupal\migrate\Plugin\migrate\destination\EntityContentBase for more info.
  • Unlike the previous migration, we also define a process mapping for the taxonomy term ID (tid). Here we use the migration_lookup process plugin to map the IDs to the ones from the original migration. We do this to ensure that our migrated entity translations are associated with the correct source entities. Check out Drupal\migrate\Plugin\migrate\process\MigrationLookup for how this plugin works.
  • Specific to this destination type (content entities), we also need to import a default value into the content_translation_source field if we want the resulting entity translation to be correct. We simply default this to English because that was the language the original migration imported in; it is the source language in the translation set.
  • Finally, because we need to lookup in the original migration, we also define a migration dependency on the original migration. So that the original gets run, followed by all the translation ones.

You’ll notice another important difference: the term name is missing from the mapping. That will be handled in the deriver based on the actual language of the derivative because this is not something we can determine statically at this stage. So let’s see that now.

In our main module namespace we can create this very simple deriver (which we referenced in the migration above):

namespace Drupal\advanced_migrations;

use Drupal\Component\Plugin\Derivative\DeriverBase;
use Drupal\Core\Language\LanguageInterface;
use Drupal\Core\Language\LanguageManagerInterface;
use Drupal\Core\Plugin\Discovery\ContainerDeriverInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Deriver for the category translations.
 */
class CategoriesLanguageDeriver extends DeriverBase implements ContainerDeriverInterface {

  /**
   * @var \Drupal\Core\Language\LanguageManagerInterface
   */
  protected $languageManager;

  /**
   * CategoriesLanguageDeriver constructor.
   *
   * @param \Drupal\Core\Language\LanguageManagerInterface $languageManager
   */
  public function __construct(LanguageManagerInterface $languageManager) {
    $this->languageManager = $languageManager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, $base_plugin_id) {
    return new static(
      $container->get('language_manager')
    );
  }

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    $languages = $this->languageManager->getLanguages();
    foreach ($languages as $language) {
      // We skip EN as that is the original language.
      if ($language->getId() === 'en') {
        continue;
      }

      $derivative = $this->getDerivativeValues($base_plugin_definition, $language);
      $this->derivatives[$language->getId()] = $derivative;
    }

    return $this->derivatives;
  }

  /**
   * Creates a derivative definition for each available language.
   *
   * @param array $base_plugin_definition
   * @param LanguageInterface $language
   *
   * @return array
   */
  protected function getDerivativeValues(array $base_plugin_definition, LanguageInterface $language) {
    $base_plugin_definition['process']['name'] = [
      'plugin' => 'skip_on_empty',
      'method' => 'row',
      'source' => 'label_' . $language->getId(),
    ];

    $base_plugin_definition['process']['langcode'] = [
      'plugin' => 'default_value',
      'default_value' => $language->getId(),
    ];

    return $base_plugin_definition;
  }

}

All plugin derivers extend Drupal\Component\Plugin\Derivative\DeriverBase and have only one method to implement: getDerivativeDefinitions(). And to make our class container-aware, we implement the deriver-specific ContainerDeriverInterface, which provides us with the create() method.

getDerivativeDefinitions() receives an array containing the base plugin definition, so essentially our entire YAML migration file turned into an array. It needs to return an array of derivative definitions keyed by their derivative IDs, and it’s up to us to decide what these are. In our case, we simply load all the available languages on the site and create a derivative for each. The definition of each derivative needs to be a “version” of the base one, and we are free to do what we want with it as long as it still remains correct. So for our purposes, we add the two process mappings we need to determine dynamically:

  • The taxonomy term name. But instead of the simple Get plugin, we use the Drupal\migrate\Plugin\migrate\process\SkipOnEmpty one because we don’t want to create a translation at all for this record if the source column label_[langcode] is missing. Makes sense right? Data is never perfect.
  • The translation langcode which defaults to the current derivative language.

And with this we should be ready. We can clear the cache and inspect our migrations again. We should see a new one with the ID category_translations:ro (the base plugin ID + the derivative ID). And we can now run this migration as well and we’ll have our term translations imported.
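Assuming Romanian is the only other enabled language, running it looks like this:

drush migrate:import category_translations:ro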

Other examples

I think dynamic migrations are extremely powerful in certain cases. Importing translations is a very common thing to do, and this is a nice way of doing it. But there are other examples as well. For instance, importing Commerce products: you’ll create a migration for the products and one for the product variations. But a product can have multiple variations depending on the actual product specification. For example, the product can have 3 prices depending on 3 delivery options. So you can dynamically create the product variation migrations for each of the delivery options. Or whatever the use case may be.

Conclusion

As we saw, the Drupal 8 migration system is extremely powerful and flexible. It allows us to concoct all sorts of creative ways to read, clean and save our external data into Drupal. But the reason this system is so powerful is because it rests on the lower-level plugin API which is meant to be used for building such systems. So migrate is one of them. But there are others. And the good news is that you can build complex applications that leverage something like the plugin API for extremely creative solutions. But for now, you learned how to get your translations imported which is a big necessity.

OSTraining: Give a Unique Look to Your Google Maps in Drupal

Planet Drupal - Thu, 2019/06/13 - 8:44am

Google Maps don't look appealing or pretty by default when you embed them in your Drupal content. Nor do they always nicely coordinate with your site's look and feel.

What if you found a way to give them a custom design? For example - your own color? In this tutorial, you will learn how to give your Drupal Google Maps a custom style with the Styled Google Map contrib module.

Drupal Association blog: Drupal Association Board Elections, 2019

Planet Drupal - Thu, 2019/06/13 - 8:42am

With Drupal 9 approaching rapidly, it is an exciting time to be on the Drupal Association Board. The Association must continue to evolve alongside the project so we can continue providing the right kind of support. And, it is the Drupal Association Board who develops the Association’s strategic direction by engaging in discussions around a number of strategic topics throughout their term. As a community member, you can be a part of this important process by becoming an At-large Board Member.

We have two At-large positions on the Association Board of Directors. These positions are self-nominated and then elected by the community. Simply put, each At-large Director position is designed to ensure there is community representation on the Drupal Association Board.

Inclusion 2018

In 2018, we made a special effort to encourage geographic inclusion through the people who were candidates for election, and we were delighted that candidates stood on six continents all across the world — thank you!

2019

Now, in 2019, and recognising we are in the middle of Pride Month, we want to particularly encourage nominations from candidates from underrepresented or marginalised groups in our community. As referenced later in this blog post, anyone is eligible to nominate themselves, and voters can vote for whichever candidate they choose, but we want to encourage this opportunity to amplify the voices of underrepresented groups with representation on the Association Board. And as we meet the candidates, whether they are allies or members of these groups themselves, we hope to center issues of importance to these communities - in addition to the duties of care for the management of the Association that are always central to a board role.

As always, any individual can stand for election to the board, but by centering these important issues we are determined to encourage a board made of diverse members as that gives them the best ability to represent our diverse community.

If you are interested in helping shape the future of the Drupal Association, we encourage you to read this post and nominate yourself between 29 June, 2019 and 19 July, 2019.

What are the Important Dates?

Self nominations: 29 June, 2019 to 19 July, 2019

Meet the candidates: 22 July, 2019 to 26 July, 2019

Voting: 1 August, 2019 to 16 August, 2019

Votes ratified, Winner announced: 3 September, 2019

How do nominations and elections work?

Specifics of the election mechanics were decided through a community-based process in 2012 with participation by dozens of Drupal community members. More details can be found in the proposal that was approved by the Drupal Association Board in 2012 and adapted for use this year.

What does the Drupal Association Board do?

The Board of Directors of the Drupal Association are responsible for financial oversight and setting the strategic direction for serving the Drupal Association’s mission, which we achieve through Drupal.org and DrupalCon. Our mission is: “Drupal powers the best of the Web.  The Drupal Association unites a global open source community to build and promote Drupal.”

New board members will contribute to shaping the strategic direction of the Drupal Association. Board members are advised of, but not responsible for, matters related to the day-to-day operations of the Drupal Association including program execution, staffing, etc.

Directors are expected to contribute around five hours per month and attend three in-person meetings per year (financial assistance is available if required).

Association board members, like all board members for US-based organizations, have three legal obligations: duty of care, duty of loyalty, and duty of obedience. In addition to these legal obligations, there is a lot of practical work that the board undertakes. These generally fall under the fiduciary responsibilities and include:

  • Overseeing Financial Performance

  • Setting Strategy

  • Setting and Reviewing Legal Policies

  • Fundraising

  • Managing the Executive Director

To accomplish all this, the board comes together three times a year during two-day retreats. These usually coincide with the North American and major European Drupal Conferences, as well as one February meeting. As a board member, you should expect to spend a minimum of five hours a month on board activities.

Some of the topics that will be discussed over the next year or two are:

  • Strengthen sustainability

  • Grow Drupal adoption through our channels and partner channels

  • Evolve drupal.org and DrupalCon goals and strategies.

Who can run?

There are no restrictions on who can run, and only self-nominations are accepted.

Before self-nominating, we want candidates to understand what is expected of board members and what types of topics they will discuss during their term. That is why we now require candidates to review a set of materials before completing their candidate profile.

What will I need to do during the elections?

During the elections, members of the Drupal community will ask questions of candidates. You can post comments on candidate profiles here on assoc.drupal.org.

In the past, we held group “meet the candidate” interviews. With many candidates in the last few years, group videos didn’t allow each candidate to properly express themselves, so we have replaced the group interview and now allow candidates to create their own 3-minute video and add it to their candidate profile page. These videos must be posted by 19 July, 2019, and the Association will promote them to the community from 22 July, 2019. Hint: great candidates are those that exemplify the Drupal Values & Principles, which might provide structure for a candidate video. You are also encouraged to especially consider diversity and inclusion.

How do I run?

From 29 June, 2019, go here to nominate yourself. If you are considering running, please read the entirety of this post, and then be prepared to complete the self-nomination form. This form will be open from 29 June, 2019 through 19 July, 2019 at midnight UTC. You'll be asked for some information about yourself and your interest in the Drupal Association Board. When the nominations close, your candidate profile will be published and available for Drupal community members to browse. Comments will be enabled, so please monitor your candidate profile so you can respond to questions from community members. We will announce the new board member via our blog and social channels on 3 September, 2019.

Reminder: you must review the required materials before completing your candidate profile.

Who can vote?

Voting is open to all individuals who have a Drupal.org account by the time nominations open and who have logged in at least once in the past year. If you meet these criteria, your account will be added to the voters list on association.drupal.org and you will have access to the voting.

To vote, you will rank candidates in order of your preference (1st, 2nd, 3rd, etc.). You do not need to enter a vote on every candidate. The results will be calculated using an "instant runoff" method. For an accessible explanation of how instant runoff vote tabulation works, see videos linked in this discussion.

Elections process

Voting will be held from 1 August, 2019 to 16 August, 2019. During this period, you can review and comment on candidate profiles on assoc.drupal.org.

Finally, the Drupal Association Board will ratify the election and announce the winner on 3 September, 2019.

Have questions? Please contact Drupal Association Community Liaison, Rachel Lawson.

Finally, many thanks to nedjo for pioneering this process and documenting it so well!

OSTraining: How to Log In to Drupal Without the Login Block

Planet Drupal - Thu, 2019/06/13 - 7:09am

This is actually quite a common question from our students. They start building their Drupal site. Then they go to work with their blocks or menus.

Then they accidentally disable the "Log in" menu link. There is no "Log in" link displayed on the site anymore. Neither for them nor for their visitors.

In this short tip, you will learn how to log in to your Drupal admin page in such a situation.

heykarthikwithu: Perform HTTP request in Drupal 7

Planet Drupal - Thu, 2019/06/13 - 7:02am

To perform an HTTP request in Drupal 7, we can use the drupal_http_request() function. This is a flexible and powerful HTTP client implementation that correctly handles GET, POST, PUT and other HTTP methods, as well as redirects.
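As a quick illustration, a POST request could look like this (a minimal sketch; the URL and form values are placeholders):

$options = array(
  'method' => 'POST',
  'data' => http_build_query(array('name' => 'value')),
  'timeout' => 15,
  'headers' => array('Content-Type' => 'application/x-www-form-urlencoded'),
);
$result = drupal_http_request('https://example.com/api', $options);
if ($result->code == 200) {
  // The response body is available in $result->data.
  $data = $result->data;
}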

Evolving Web: Top 4 Takeaways from Drupal North Day 1

Planet Drupal - Thu, 2019/06/13 - 5:48am

Today marked the kick-off of Drupal North 2019, and Evolving Web is excited to be a part of it for the 4th year in a row. Day 1 was packed with trainings, summits (for the 1st time!), and networking opportunities. Here are the key takeaways we saw:

Drupal is for everyone

In the "What is Drupal?" and "Qu'est-ce que c'est Drupal?" trainings by Evolving Web's own Trevor Kjorlien and Adrian Cid Almaguer, everyone from developers, to project managers, to graphic designers and more, took part in a hands-on demonstration on how to build a site with Drupal.

Nobody wants a website

A website is just a tool for you to achieve your larger goals. Whether that be building a community, selling a product, getting donations, providing information, or anything else, your website has to be designed with your goals in mind. That being said:

Focus on what your audience wants, not what you want

Your website should always be making your audience's life easier and giving them what they are looking for as quickly as possible. It's important to step out of your own shoes and into theirs in order to have a good understanding of what they want so you can cater to those needs.

Students really love chocolate

While sharing her experiences in getting students to participate in UX/UI studies, Joyce Peralta from McGill University explained that sometimes it's the small incentives that can be the most effective. Through many attempts, she found that students could be easily swayed by a simple table full of chocolate bars situated in a prime location in the library. Simple but effective!

Drupal North started off on a great foot and we're looking forward to the next two days of sessions. If you're attending, make sure to check out presentations from our team:

Jacob Rockowitz: Webform Open Collective Office Hours

Planet Drupal - Wed, 2019/06/12 - 8:08pm

In my post, Drupal is frustrating, I stated that enterprise websites need, want, and are willing to pay for better support options when using Open Source software. Organizations have reached out to me as a Webform module subject matter expert (SME) seeking to start a 1-to-1 support relationship. Occasionally, these relationships result in a sponsored feature request. Sometimes organizations want to ask me a simple question or at least know that I am available to answer questions. In the past, I shied away from the idea of setting up regular office hours because it would be an unpaid commitment of my time during business hours. Fortunately, with the existing funds collected by the Webform module's Open Collective, I feel that now is a good time to experiment and set up some initial office hours for the Webform module.

About office hours

The goal of office hours is to make it easier for me to help people and organizations with questions and issues related to the Webform module for Drupal 8 as well as to assist current and future Webform module contributors.

Sponsor office hours

Sponsor office hours are intended to help backers of the Webform module's Open Collective with any Webform related questions or challenges. These office hours will be strictly for monthly sponsors and backers of the Webform module's Open Collective.

Add-ons office hours

Add-ons office hours are for anyone in the Drupal community building Webform add-ons and extensions that are being contributed back to the open source community. The goal of these hours is to help support and improve the quality of the projects and community around the Webform module.

Office hour guidelines

I've been...

Palantir: Leading Patient Engagement Solutions Company

Planet Drupal - Wed, 2019/06/12 - 7:58pm

Content modeling as a practical foundation for future scalability in Drupal.

Palantir recently partnered with a patient engagement solutions company that specializes in delivering patient and physician education to improve health outcomes and enhance the patient experience. They have an extensive library of patient education content that they use to build education playlists, which are delivered to more than 51,000 physician offices, 1,000 hospitals, and 140,000 healthcare providers - and they are still growing.

The company is in the process of completely overhauling their technical stack so that they can rapidly scale up the number of products they use to deliver their patient education library. Currently, every piece of content needs to be entered separately for each product it can be delivered on, which forces the content teams to work in silos. In addition, because they use a dozen different taxonomies, and using them correctly requires a high level of context and nuance, any tagging of content can only be done at the manager level or above. The company partnered with Palantir.net to remove these bottlenecks and plan for future scalability.

Key Outcome

Palantir teamed up with this patient engagement solutions company to develop a master content model that:

  • Captures key content types and their relationships
  • Creates a standardized structure for content, including fields that enable serving content variations based on end-point devices and localization
  • Incorporates a taxonomy that enables content admins to quickly filter and select content relevant to their needs and device
Enabling Scalable Growth

The company’s content library is only getting larger over time, so the core need driving the master content model is to enable scalable growth. Specifically, that means a future state where:

  • New products can be added and old products deprecated without restructuring content. 
  • Content filtering can scale up for new product capabilities, languages, and specialties without having to be fundamentally reworked. 
  • Clients using the taxonomy find it intuitive and require minimal specific training to create and amend their own patient education playlists. 

These principles guided our recommendations for the content model and taxonomy.

Content Model

Our client’s content model is currently organized by the end product that content is delivered through - for example, a waiting room screen vs. an interactive exam room touchscreen. This approach requires the digital team to enter the same piece of content multiple times.

To streamline this process for the team, we recommended a master content model that is organized by the purpose of the content, including the mindset of the audience and the high-level strategy for delivering value with that content.

For example, a “highlight” is a small piece of content intended to engage the audience and draw them into deeper exploration, while a “quiz” is a test of knowledge of a particular topic as training or entertainment.

This approach allows the company to separate the content types from products, which in turn makes them easier to scale. For example, this wireframe shows how a single piece of quiz content can be delivered on a range of endpoint devices depending on which fields that device uses. This approach allows us to show how a quiz might be delivered on a voice device, which is a product the company does not yet support, but could in the future.

“Our content is tailored to different audiences with different endpoints. Palantir took the initiative to not only learn about all of our content paths, but to also learn how our content managers interact with it on a daily basis. We’ve relied heavily on their expertise, especially for taxonomy, and they delivered.”

Executive Vice President, Content & Creative

Taxonomy

The company’s taxonomy has 12 separate vocabularies, and using them to construct meaningful content playlists requires a deep understanding of both the content and the audience. Existing content has been tagged based on both the information it contains and based on the patients to whom it would be relevant.

For example, a significant proportion of cardiology patients are affected by diabetes, so a piece of content titled "Healthy Eating with Diabetes" would be tagged with both "Diabetes" and "Cardiology". Additionally, many tags have subtle differences in how they are used — when do you use "cardiology" vs. "cardiovascular conditions"? "OB/GYN" vs. "Women's Health"?

This system requires that everyone managing the content — from content creators to healthcare providers and staff selecting content to appear in their medical practice — understand the full set of terms and the nuance of how they are applied in order to tag content consistently.

Our goal was to develop a taxonomy that can be used to filter content effectively without requiring deep platform-specific context and nuance.

Our guiding principles were to:

  • Tag based on the information in the content.
  • Use terms that are meaningful to a general audience.
  • Use combinations of tags to provide granularity.
  • Avoid duplicate information that is available as properties of the content.

We ultimately recommended a set of eight vocabularies. Two of them are based on company-specific business processes, and the remaining six are standards-based so that any practitioner can use them. By using combinations of terms, users can create playlists that are balanced in terms of educational and editorial content.

For example, in our recommended taxonomy, relevant content is tagged as referencing diabetes, so that the person building the playlist can still construct effective content playlists, without needing to carry in their head the nuance that many cardiology patients are also diabetic.

Moving Forward With Next Steps

This content modeling engagement spanned 9 weeks, and the Palantir team delivered:

  • A high-level content model identifying the core content types and their relationships
  • A set of global content fields that all content types in the model should have
  • A field level content model for the four most important content types
  • A new taxonomy approach based on internal user testing
  • A Drupal Demo code base showing how the content types and taxonomy can be built in Drupal 8

 

In the future, the company’s ultimate goal for the platform is to scale their engagement offerings with new content and new technology. With our purpose-driven content model and refined taxonomy, the company can scale their business by breaking down internal content silos and making tagging and filtering content consistent and predictable for their internal team and eventually, their customers. Palantir’s master content modeling work forms a practical foundation for the company’s radical re-platforming work.

Sooper Drupal Themes: Open Source Software: Here is why it's OUR future

Planet Drupal - Wed, 2019/06/12 - 4:16pm
The World is Moving Towards Open Source Software

Open source software has been around for some time now. When it first came out, open source software was perceived as risky and immature. However, with the passage of time, more and more companies started developing and building upon open source. Two great open source examples that have been pioneering the industry for a while now are the Drupal CMS and the Linux OS.

What is Open Source Software?

So, what exactly is open source software? Well, open source describes the type of software that has no proprietary license attached to it. Instead, it's published with a license that guarantees the software will forever be free to download, distribute, and use. This also means that unlike proprietary software, the code can be inspected by anybody. On top of that, if somebody wants to customize the code to their needs by changing it, they are free to do it.

Proprietary software is often the exact opposite. The code of proprietary software cannot be copied and distributed freely, and modifications to the code are prohibited, so if issues arise, you cannot fix them by yourself. You have to rely on the software vendor to fix the problem for you.

Open source has its set of advantages as well as its disadvantages. 

Advantages of Open Source Software

So, you might wonder what are the specific advantages of open source as opposed to software with a proprietary license. Here are some advantages:

  • Flexibility: Open source software is known for having great flexibility, which is granted by the fact that the code is open. Thus, people are able to customize it to their needs.

  • Speed: Competition in the digital era is fiercer than ever before. One of the defining factors dictating the success of a company over its competition is the speed of innovation. Luckily, the companies that are using open source software know that open source facilitates speed. By not having to deal with the bureaucracy that comes with proprietary software, everything can be set up to work in a fast and reliable way.

  • Cost efficiency: Another trump card in the arsenal of open source software is the cost efficiency it provides. Open source software can be used by anyone free of charge; many projects are licensed under the GNU General Public License, which ensures that anyone who distributes the software must also make the code available for other people to use. Successful open source communities leverage the power of the community by providing good infrastructure for the community to share and review software extensions and improvements.

  • Security: Proprietary software has had a reputation of being more secure than its open source counterpart. Part of this was due to the popular belief that if the source code is hidden from the public, then hackers will have a harder time cracking it. However, this is far from the truth. The code of open source software is available for everybody to see, which could in theory make it more vulnerable. But because everyone has access to it, the code is easier to peer review. In this way, people are able to spot vulnerabilities far more easily than with proprietary code, making it easier for developers to fix said vulnerabilities.

Disadvantages of Open Source Software

Now that we’ve talked about the advantages of open source, we should also discuss its shortcomings.

  • Not user-friendly: A common problem with open source projects is a lack of focus on design and user-friendliness. People might have a harder time adapting to the interface of open source software compared to competing proprietary platforms. Of course, this is not true for all open source projects, but it is common to see that well-funded companies are better able to attract and afford the best designers.

  • Hidden costs: Although open source software is hailed as free to use, it actually is not. When adopting new software for a business, a decision maker also has to take different factors into account. For example, it is easy to overlook the cost of setting up and customizing the software for the company, paying for the training of the employees, or hiring skilled personnel able to actually operate the software. Even if the adoption is not for business use, a time investment still has to be made in order to properly use the software to its full potential.

  • Lackluster support: When it comes to proprietary software, there are often dedicated departments that are ready to help a struggling user with their issues. In contrast, most open source software does not enjoy the same level of support. Open source does tend to gather dedicated communities around it that can be helpful in solving some issues; however, it’s good to keep in mind that these people are not paid for their service and might not be able to solve all the issues that arise.

  • Orphan software: Proprietary software can enjoy a longer lifespan than its open source counterparts. One of the risks of using OSS is that the community, the developers, or both lose interest in the project or move on to another project. What this means is that the software will stop being developed and supported. The users of the software will be left high and dry and will have to migrate to another platform. Of course, there are also plenty of commercial software projects that go out of business, but strong commercial backing does increase confidence in the continuity of the software. Some open source projects have loosely associated commercial backing, like Red Hat backing Linux and Acquia backing Drupal.

Tech Giants buy Open Source Software Companies

Lately, more and more large tech companies have been building a presence on the open source market. Three such examples, covered below, are IBM, AT&T and Acquia.

IBM acquires Red Hat

On 28 October 2018, IBM acquired Red Hat for $34 billion, a gargantuan amount of money. The aim of this acquisition is for IBM to shape the cloud and open source market for the years to come. IBM is betting a lot of money on this acquisition in order to secure a lead on the market. However, there are some skeptics of this acquisition. They claim that IBM is going to ruin the Red Hat culture, as its track record so far suggests, kind of like some sort of corporate colonization. Only time will tell how this acquisition is going to shape the future of open source software. Nevertheless, the willingness of IBM to dish out so much money proves that open source software is seriously a path to the future.

AT&T acquires AlienVault

AlienVault is the developer of an open source solution that manages cyber attacks. It includes the Open Threat Exchange, the world's largest crowd-sourced computer security platform. It was acquired by AT&T on August 22, 2018, and has since been renamed from AlienVault to AT&T Cybersecurity. With the high reach and resources of AT&T, the former AlienVault is sure to have a bigger impact on the cyber safety of the world. However, this acquisition sparked a lot of controversy, mainly with some supporters of AlienVault claiming that this is the end for the brand. This is true insofar as the company was renamed, but time will tell if there are going to be more radical changes to their business model under the ownership of AT&T.

Acquia acquires Mautic

With the acquisition of the open source marketing automation tool Mautic on 8 May 2019, Acquia is aiming to strengthen its presence on the open source software scene. Together with Mautic, Acquia is going to deliver the only open source alternative to proprietary solutions, expanding on Acquia's vision to deliver the industry's first Open Digital Experience Platform. On top of that, unlike the other two companies, Acquia has a strong open source culture, making the acquisition of Mautic a well-thought-out business decision.

Apps, Plug-ins, and Services: When Open Source Mingles With Closed Source Software

Android, Google, and Huawei

Android is an open source operating system for mobile phones. Formally, it is known as AOSP (the Android Open Source Project), a project developed by Google. The OS is based on a modified version of the Linux kernel and is designed primarily for touchscreen mobile devices. It is licensed under Apache 2.0, which makes it possible for users to modify and distribute modifications if they choose to. Even so, in the recent case of the U.S. ban on Huawei, Google announced the new trade embargo forced them to retract Huawei's Android license. Now, since Android is open source, the OS itself is still free to use. However, practically all Android devices outside of China come with Google services and apps pre-installed, and these Google apps play an important role in any Android device. Google can do this since apps like Google Maps, YouTube, Gmail, the Play Store, etc. are not open source, and companies need a license agreement in order to have them on their devices. The Google Play Store is also a paid service; it provides security checks and code validation for app updates, which forms a very important security layer on the Android platform.

To add insult to injury, losing the partnership with Google means Huawei will not get timely security updates to the AOSP Android platform. When Google fixes vulnerabilities, they first send out the fix to partners, and only after partners have had time to publish the update to their devices does the patch become public. This means Huawei's devices will have increased exposure to hackers and viruses in the window before each security patch becomes public and can be pushed to Huawei devices.

Sooperthemes: Providing and Supporting Paid Drupal Extensions

Here at Sooperthemes, we are passionate about the Drupal project. We want to see Drupal thrive and become better than its competitors. In order to do that, we had to find out in which areas Drupal can be improved. As it turns out, there was a strong need for Drupal to be easier to navigate and to use in site-building for users who are in a marketing or communication department and do not have deep technical knowledge. That's why Sooperthemes has developed Glazed Builder, a powerful visual page builder that anyone can use without needing to write, or even see, any code. With Glazed Builder, Sooperthemes wants to make the power of Drupal accessible to a wider audience and to make it easy for them to build, maintain, and grow a Drupal-based website.

Although other open source platforms like Android, WordPress, and even Linux OS have had a thriving ecosystem of paid applications and plugins for many years, the same cannot be said for Drupal. Fortunately, with our 13+ years of experience in the Drupal community, we were able to create a combination of product and service that thrives in the Drupal community.  

Conclusion

As the latest trends show, open source seems to be here to stay and to become the staple of software in the near future. This prediction is based not only on the benefits that open source software brings but also on the amount of interest that major companies in the tech world are showing towards it. The most successful recipe seems to be a mix of an open source platform and paid-for applications. The paid-for applications are especially handy for components that require more involvement from marketing and UX design experts, who are not typical contributors in open source software communities.

Drudesk: UpTime Widget Drupal module to show website reliability

Planet Drupal - Wed, 2019/06/12 - 4:00pm

There are many beautiful words to tell your customers that your website is trustworthy, reliable, and transparent. But one small widget can say it better than a thousand words.

So let us introduce the UpTime Widget Drupal module. See how it could help you always stay aware of your website uptime, build customer trust, and stand out from competitors.

Srijan Technologies: Make your Travel Business a Global Phenomenon with Drupal

Planet Drupal - Wed, 2019/06/12 - 3:16pm

The tour and travel business has started to catch up in the digital realm. In fact, it’s growing faster than the total travel market. It is predicted that by 2020, the overall tours and activities segment will grow to $183 billion.

A clear opportunity for businesses in the travel industry.

InternetDevels: Artificial intelligence and Drupal 8: amazing opportunities & useful modules

Planet Drupal - Wed, 2019/06/12 - 2:30pm

It’s exciting to see how once unimaginable things become popular digital practices! A vivid example is artificial intelligence. We have shared with you an article about artificial intelligence coming to your apps thanks to cognitive services. What about Drupal websites — are they ready for AI? The answer is a definite yes! Let’s see how artificial intelligence and Drupal 8 come together.

OSTraining: How to Integrate Telegram Chat With Drupal 8

Planet Drupal - Wed, 2019/06/12 - 8:29am

Telegram is an easy-to-use, free chat application that is rapidly winning fans all over the world.

There is a Telegram plugin for WordPress but there is not yet a Telegram module for Drupal.

In this tutorial, you will learn how to integrate the Telegram app with your Drupal 8 site using JavaScript from Re:plain.

Gizra.com: Tools with Friendly Learning Curve: ddev

Planet Drupal - Wed, 2019/06/12 - 7:00am

Some years ago, a frontend developer colleague mentioned that we should introduce SASS, as it requires almost no preparation to start using it. Then, as we progressed, we could use more and more of it. He proved to be right. A couple of months ago, our CTO, Amitai, made a similar move. He suggested using ddev as part of rebuilding our starter kit for a Drupal 8 project. I had the same feeling: even though I did not know all the details about the tool, it felt right to introduce it, and it quickly became evident that it would be beneficial.

Here’s the story of our affair with it.

For You

After the installation, a friendly command-line wizard (ddev config) asks you a few questions:

The configuration wizard holds your hand

It gives you an almost perfect configuration, and in the .ddev directory you can review the YAML files. In .ddev/config.yaml, pay attention to router_http_port and router_https_port: these ports should be free, but the default port numbers are almost certainly occupied by a local Nginx or Apache on your development system already.
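For example, in .ddev/config.yaml that could look like this (the port numbers are just example values, pick any free ones):

router_http_port: "8080"
router_https_port: "8443"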

After the configuration, ddev start creates the Docker containers you need, nicely pre-configured according to the selection. Even if your site was installed previously, you’ll be faced with the installation process when you try to access the URL as the database inside the container is empty, so you can install there (again) by hand.

You have a site inside ddev, congratulations!

For All of Your Coworkers

So now ddev serves the full stack under your site, but is it ready for teamwork? Not yet.

You probably have your own automation that bootstraps the local development environment (site installation, specific configurations, theme compilation, just to name a few), now it’s time to integrate that into ddev.

The config.yaml provides various directives to hook into the key processes.

A basic Drupal 8 example in our case looks like this:

hooks:
  pre-start:
    - exec-host: "composer install"
  post-start:
    # Install Drupal after start
    - exec: "drush site-install custom_profile -y --db-url=mysql://db:db@db/db --account-pass=admin --existing-config"
    - exec: "composer global require drupal/coder:^8.3.1"
    - exec: "composer global require dealerdirect/phpcodesniffer-composer-installer"
  post-import-db:
    # Sanitize email addresses
    - exec: "drush sqlq \"UPDATE users_field_data SET mail = concat(mail, '.test') WHERE uid > 0\""
    # Enable the environment indicator module
    - exec: "drush en -y environment_indicator"
    # Clear the cache, revert the config
    - exec: "drush cr"
    - exec: "drush cim -y"
    - exec: "drush entup -y"
    - exec: "drush cr"
    # Index content
    - exec: "drush search-api:clear"
    - exec: "drush search-api:index"

After the container is up and running, you might like to automate the installation. In some projects, that’s just the dependencies and the site installation, but sometimes you need additional steps, like theme compilation.

In a development team, you will probably have dev, stage, and live environments that you would like to routinely sync to your local machine for debugging and more. For this case, ddev integrates with hosting providers, so all you need is a ddev pull and a short configuration in .ddev/import.yaml:

provider: pantheon
site: client-project
environment: test
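With that in place, a sync is a single command; a quick sketch (the auth step uses a Pantheon machine token, exactly as in the CI script further below):

# Authenticate against Pantheon once, then pull files and database.
ddev auth-pantheon "$PANTHEON_KEY"
ddev pull -y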

After the files and database are in sync, everything in the post-import-db hook will be applied, so we could drop the existing scripts we had for this purpose.

We still prefer to keep a shell script wrapper in front of ddev, so we have even more freedom to tweak things and keep everything automated. Most notably, ./install does a regular ddev start, which results in a fresh installation, while ./install -p saves the time of a full install if you would rather get a copy of a Pantheon environment.
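A minimal sketch of what such a wrapper can look like — the flag handling mirrors the description above, but the details are assumptions, not our actual script:

#!/usr/bin/env bash
# ./install -- thin wrapper around ddev.
# -p: pull a copy of the Pantheon environment instead of a fresh install.
set -e

if [[ "$1" == "-p" ]]; then
  ddev start
  ddev pull -y   # sync files and database from Pantheon
else
  ddev start     # the post-start hooks perform the fresh install
fi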

For the Automated Testing

Now that the team is happy with the new tool, they might still hit some issues, but for us nothing was a blocker. The next step is to make sure that CI uses the same environment. Before doing that, you should decide whether it is more important to match the production environment or to make Travis easily debuggable. If you execute realistic, browser-based tests, you might want to go with the first option and leave ddev out of the testing flow; for us, though, it was desirable to be able to spin up a local site identical to the one inside Travis. And unlike with our old custom Docker image, image maintenance is now solved for us.

Here’s our shell script that spins up a Drupal site in Travis:

#!/usr/bin/env bash
set -e

# Load helper functionality.
source ci-scripts/helper_functions.sh

# -------------------------------------------------- #
# Installing ddev dependencies.
# -------------------------------------------------- #
print_message "Install Docker Compose."
sudo rm /usr/local/bin/docker-compose
curl -s -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" > docker-compose
chmod +x docker-compose
sudo mv docker-compose /usr/local/bin

print_message "Upgrade Docker."
sudo apt -q update -y
sudo apt -q install --only-upgrade docker-ce -y

# -------------------------------------------------- #
# Installing ddev.
# -------------------------------------------------- #
print_message "Install ddev."
curl -s -L https://raw.githubusercontent.com/drud/ddev/master/scripts/install_ddev.sh | bash

# -------------------------------------------------- #
# Configuring ddev.
# -------------------------------------------------- #
print_message "Configuring ddev."
mkdir ~/.ddev
cp "$ROOT_DIR/ci-scripts/global_config.yaml" ~/.ddev/

# -------------------------------------------------- #
# Installing Profile.
# -------------------------------------------------- #
print_message "Install Drupal."
ddev auth-pantheon "$PANTHEON_KEY"

cd "$ROOT_DIR"/drupal || exit 1

if [[ -n "$TEST_WEBDRIVERIO" ]]; then
  # As we pull the DB always for WDIO, here we make sure we do not do a fresh
  # install on Travis.
  cp "$ROOT_DIR"/ci-scripts/ddev.config.travis.yaml "$ROOT_DIR"/drupal/.ddev/config.travis.yaml
  # Configures the ddev pull with Pantheon environment data.
  cp "$ROOT_DIR"/ci-scripts/ddev_import.yaml "$ROOT_DIR"/drupal/.ddev/import.yaml
fi

ddev start
check_last_command

if [[ -n "$TEST_WEBDRIVERIO" ]]; then
  ddev pull -y
fi
check_last_command

As you can see, we even rely on the hosting provider integration, but of course that’s optional. All you need to do after setting up the dependencies and the configuration is to run ddev start; then you can launch tests of any kind.
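For example, a sketch of kicking off tests inside the running containers — the test runner and path are placeholders for whatever your project uses:

# Run the test suite inside the web container; the path is a placeholder.
ddev exec ./vendor/bin/phpunit --testsuite functional

# Or open a shell in the container and run anything interactively.
ddev ssh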

All the custom Bash functions above are adapted from https://github.com/Gizra/drupal-elm-starter/blob/master/ci-scripts/helper_functions.sh, and we are in the process of ironing out a starter kit for Drupal 8, needless to say, with ddev.

One key step is to make ddev non-interactive; see the global_config.yaml that the script copies:

APIVersion: v1.7.1
omit_containers: []
instrumentation_opt_in: false
last_used_version: v1.7.1

This way it does not ask about the data-collection opt-in, which would break the non-interactive Travis session. If you are interested in using ddev pull as well, use encrypted environment variables to pass the machine token securely to Travis.
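For example, with the Travis CLI — the variable name matches the $PANTHEON_KEY used in the script above, and the token value is a placeholder:

# Encrypt the Pantheon machine token and append it to .travis.yml.
travis encrypt PANTHEON_KEY="your-machine-token" --add env.global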

The Icing on the Cake

ddev has a welcoming developer community. We got a quick and meaningful reaction to our first issue, and by the time of writing this blog post, we already have a merged PR to make ddev play nicely with Drupal-based web services out of the box. Contributing to this project is definitely rewarding: there are 48 contributors and counting.

The Scene of the Local Development Environments

Why ddev? Why not a more popular choice, such as Lando or Drupal VM? For us, the main reasons were the Pantheon integration and the pace of development. It definitely has momentum: in 2018 it was the 13th most popular local development environment amongst Drupal developers; in 2019 it sits in 9th place, according to the 2019 Drupal Local Development survey. This is what you sense when you try to contribute: the open and active state of the project. What is certain, based on the survey, is that Docker-based environments are now the most popular, and with a frontend that hides all the pain of working with raw Docker/docker-compose commands, it's clear why. Try it (again) these days: you can really forget the hassle and enjoy the benefits!

Continue reading…

Categories:

Freelock : Layout Builders versus Content Management - are you making this mistake?

Planet Drupal - Wed, 2019/06/12 - 2:27am
John Locke - Tue, 06/11/2019 - 17:27

Glitzy websites are all the rage these days. Everybody seems to be looking for easy ways to create multimedia-rich pages. Yet there is a big downside to the current trend of page builders: if you're not careful, you might end up making your long-term content management far harder than it should be.

content management Drupal Drupal Planet Layout Builder Website Content Management WordPress
Categories:

Hook 42: Drupal Core Initiative Meetings Recap - May 2019

Planet Drupal - Tue, 2019/06/11 - 10:18pm
Hook 42 - Tue, 06/11/2019 - 20:18
Categories: