Subscribe to Planet Drupal feed
Drupal.org - aggregated feeds in category Planet Drupal
Updated: 1 hour 1 min ago

Amazee Labs: Contribution and Client Projects: Part Two

Thu, 2019/08/22 - 1:23pm
The first part of this article described why and how the stakeholders of a project can contribute to Drupal. This developer-oriented article is a summary of the Drupal.org documentation for new code contributors. We will cover: how to work on the issue queue, how to publish a project, and how to approach this process with Drupal 9 in mind. 
Categories:

Grazitti Interactive: Gear Yourself Up, Drupal 9 is Coming!

Thu, 2019/08/22 - 10:12am

Categories:

Agaric Collective: Migrating Microsoft Excel and LibreOffice Calc files into Drupal

Thu, 2019/08/22 - 12:03am

Today we will learn how to migrate content from LibreOffice Calc and Microsoft Excel files into Drupal using the Migrate Spreadsheet module. We will give instructions on getting the module and its dependencies. Then, we will present how to configure the module for spreadsheets with or without a header row. There are two example migrations: images and paragraphs. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration, whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The image migration uses a Microsoft Excel file as its source. The paragraph migration uses a LibreOffice Calc file as its source. The CSV migration is a backup in case the Google Sheet is not available. To execute that last one, you would need the Migrate Source CSV module.

You can get the Migrate Spreadsheet module using composer: composer require drupal/migrate_spreadsheet:^1.0. This module depends on the PHPOffice/PhpSpreadsheet library and many PHP extensions, including ext-zip. Check this page for a full list of dependencies. If any required extension is missing, the installation will fail. If your Drupal site is not composer-based, you will not be able to use Migrate Spreadsheet unless you jump through a lot of hoops.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Microsoft Excel and LibreOffice Calc migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources.

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Understanding the source document and plugin configuration

In any migration project, understanding the source is very important. For Microsoft Excel and LibreOffice Calc migrations, the primary thing to consider is whether or not the file contains a row of headers. Also, a workbook (file) might contain several worksheets (tabs). You can only migrate from one worksheet at a time. The example documents have two worksheets: UD Example Sheet and Do not peek in here. We are going to be working with the first one.

The spreadsheet source plugin exposes seven configuration options. The values to use might change depending on the presence of a header row, but all of them apply for both types of document. Here is a summary of the available configurations:

  • file is required. It stores the path to the document to process. You can use a relative path from the Drupal root, an absolute path, or stream wrappers.
  • worksheet is required. It contains the name of the one worksheet to process.
  • header_row is optional. This number indicates which row contains the headers. Contrary to CSV migrations, the row number is not zero-based. So, set this value to 1 if headers are on the first row, 2 if they are on the second, and so on.
  • origin is optional and defaults to A2. It indicates which non-header cell contains the first value you want to import. It assumes a grid layout and you only need to indicate the position of the top-left cell value.
  • columns is optional. It is the list of columns you want to make available for the migration. In case of files with a header row, use those header values in this list. Otherwise, use the default column titles: A, B, C, etc. If this setting is missing, the plugin will return all columns. This is not ideal, especially for very large files containing more columns than needed for the migration.
  • row_index_column is optional. This is a special column that contains the row number for each record. It can be used as a unique identifier for the records in case your dataset does not provide a suitable value. Exposing this special column in the migration is up to you. If you do, you can come up with any name as long as it does not conflict with header row names set in the columns configuration. Important: this is an autogenerated column, not any of the columns that come with your dataset.
  • keys is optional and, if not set, it defaults to the value of row_index_column. It contains an array of column names that uniquely identify each record. For files with a header row, you can use the values set in the columns configuration. Otherwise, use default column titles like A, B, C, etc. In both cases, you can use the row_index_column column if it was set. Each value in the array will contain database storage details for the column.

Note that nowhere in the plugin configuration do you specify the file type. The same setup applies to both Microsoft Excel and LibreOffice Calc files. The library will take care of detecting and validating the proper type.
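Putting the seven options together, a minimal source section could look like the following sketch; the file path, worksheet name, and column names are hypothetical placeholders:

source:
  plugin: spreadsheet
  # Hypothetical path; a path relative to the Drupal root, an absolute path, or a stream wrapper.
  file: modules/custom/my_module/sources/my_workbook.ods
  worksheet: 'My Worksheet'
  header_row: 1
  origin: A2
  columns:
    - my_first_column
    - my_second_column
  row_index_column: 'Spreadsheet Row Index'
  keys:
    my_first_column:
      type: string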

Migrating spreadsheet files with a header row

This example is for the paragraph migration and uses a LibreOffice Calc file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

book_id, book_title, Book author
B10, The definitive guide to Drupal 7, Benjamin Melançon et al.
B20, Understanding Drupal Views, Carlos Dinarte
B30, Understanding Drupal Migrations, Mauricio Dinarte

source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_book_paragraph.ods
  worksheet: 'UD Example Sheet'
  header_row: 1
  origin: A2
  columns:
    - book_id
    - book_title
    - 'Book author'
  row_index_column: 'Document Row Index'
  keys:
    book_id:
      type: string

The name of the plugin is spreadsheet. Then you use the file configuration to indicate the path to the file. In this case, it is relative to the Drupal root. The UD Example Sheet is set as the worksheet to process. Because the first row of the file contains the headers, header_row is set to 1 and origin to A2.

Then specify which columns to make available to the migration. In this case, we listed all of them, so this setting could have been left unassigned. Still, it is better to get into the habit of being explicit about what you import: if the file were to change and more columns were added, they would not be fetched unless you explicitly listed them. The row_index_column is not actually used in the migration, but it is set to show all the configuration options in the example. Its values will be 1, 2, 3, etc. Finally, keys is set to the column that serves as the unique identifier for the records.

The rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows the process and destination sections for the LibreOffice Calc paragraph migration.

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

Migrating spreadsheet files without a header row

Now let’s consider an example of a spreadsheet file that does not have a header row. This example is for the image migration and uses a Microsoft Excel file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

P01, https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02, https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03, https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg

source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_photos.xlsx
  worksheet: 'UD Example Sheet'
  header_row: null
  origin: A1
  columns:
    - A
    - B
  row_index_column: null
  keys:
    A:
      type: string

The plugin, file, and worksheet configurations follow the same pattern as in the paragraph migration. The difference for files with no header row is reflected in the other parameters. header_row is set to null to indicate the lack of headers, and origin is set to A1. Because there are no column names to use, you have to use the ones provided by the spreadsheet. In this case, we want to use the first two columns: A and B. Contrary to CSV migrations, the spreadsheet plugin does not allow you to define aliases for unnamed columns. That means that you would have to use A and B in the process section to refer to these columns.

row_index_column is set to null because it will not be used. And finally, in the keys section, we use the A column as the primary key. This might seem like an odd choice. Why use that value if you could use the row_index_column as the unique identifier for each row? If this were an isolated migration, that would be a valid option. But this migration is referenced from the node migration explained in the previous example. That lookup is made based on the values stored in the A column. If we used the index of the row as the unique identifier, we would have to update the other migration or the lookup would fail. In many cases, that is neither feasible nor desirable.
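For reference, if this were an isolated migration, a sketch of the source section keyed on the autogenerated row index could look like this; the file path and the row index column name are hypothetical placeholders:

source:
  plugin: spreadsheet
  # Hypothetical path to the workbook.
  file: modules/custom/my_module/sources/my_photos.xlsx
  worksheet: 'UD Example Sheet'
  header_row: null
  origin: A1
  columns:
    - A
    - B
  # Arbitrary name for the autogenerated row number column.
  row_index_column: 'Row Index'
  keys:
    'Row Index':
      type: integer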

Except for the names of the columns, the rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process and destination sections for the Microsoft Excel image migration.

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: B # This is the photo URL column.
destination:
  plugin: 'entity:file'

Refer to this entry to know how to run migrations that depend on others. In this case, you can execute them all by running: drush migrate:import --tag='UD Sheets Source'. And that is how you can use Microsoft Excel and LibreOffice Calc files as the source of your migrations. This example is very interesting because each of the migrations uses a different source type. The node migration explained in the previous post uses a Google Sheet. This is a great example of how powerful and flexible the Migrate API is.

What did you learn in today’s blog post? Have you migrated from Microsoft Excel and LibreOffice Calc files before? If so, what challenges have you found? Did you know the source plugin configuration is not dependent on the file type? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories:

Agaric Collective: Migrating Google Sheets into Drupal

Wed, 2019/08/21 - 7:05pm

Today we will learn how to migrate content from Google Sheets into Drupal using the Migrate Google Sheets module. We will give instructions on how to publish them in JSON format to be consumed by the migration. Then, we will talk about some assumptions made by the module to allow easier plugin configurations. Finally, we will present the source plugin configuration for Google Sheets migrations. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration, whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The last one is a backup in case the Google Sheet is not available. To execute it, you would need the Migrate Source CSV module.

You can get the Migrate Google Sheets module and its dependency using composer: composer require drupal/migrate_google_sheets:^1.0. It depends on Migrate Plus. Installing via composer will get you both modules. If your Drupal site is not composer-based, you can download them manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Google Sheets migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources. In the next article, two of the migrations will be explained. They read from Microsoft Excel and LibreOffice Calc files.

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from Google Sheets

In any migration project, understanding the source is very important. For Google Sheets, there are many details that need your attention. First, the module works on top of Migrate Plus and extends its JSON data parser. In fact, you have to publish your Google Sheet and consume it in JSON format. Second, you need to make the JSON export publicly available. Third, you must understand the JSON format provided by Google Sheets and the assumptions made by the module to configure your fields properly. Specific instructions for Google Sheets migrations will be provided. That being said, everything explained in the JSON migration example is applicable in this case too.

Publishing a Google Sheet in JSON format

Before starting the migration, you need the source from where you will extract the data. For this, create a Google Sheet document. The example will use this one:

https://docs.google.com/spreadsheets/d/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/edit#gid=0

The 1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk value is the worksheet ID, which will be used later. Once you are done creating the document, you need to publish it so it can be consumed by the Migrate API. To do this, go to the File menu and then click on Publish to the web. A modal window will appear where you can configure the export. Note that it is possible to publish the Entire document or only some of the worksheets (tabs). The example document has two: UD Example Sheet and Do not peek in here. Make sure that all the worksheets that you need are published, or export the entire document. Unless multiple URLs are configured, a migration can only import from one worksheet at a time. If you fetch from multiple URLs, they need to have homogeneous structures. When you click the Publish button, a new URL will be presented. In the example it is:

https://docs.google.com/spreadsheets/d/e/2PACX-1vTy2-CGzsoTBkmvYbolFh0UDWenwd9OCdel55j9Qa37g_earT1vA6y-6phC31Xkj8sTWF0o6mZTM90H/pubhtml

The previous URL will not be used. Publishing a document is a required step, but the URL that you get should be ignored. Note that you do not have to share the document. It is fine that the document is private to you as long as it is published. It is up to you if you want to make it available to Anyone with the link or Public on the web and potentially grant edit or comment access. The Share setting does not affect the migration. The final step is getting the JSON representation of the document. You need to assemble a URL with the following pattern:

http://spreadsheets.google.com/feeds/list/[workbook-id]/[worksheet-index]/public/values?alt=json

Replace [workbook-id] with the worksheet ID mentioned at the beginning of this section, the one that is part of the regular document URL, not the published URL. The [worksheet-index] is an integer, starting at 1, that represents the order in which the worksheets appear in the document. Use 1 for the first, 2 for the second, and so on. This means that changing the order of the worksheets will affect your migration. At the very least, you will have to update the path to reflect the new index. In the example migration, the UD Example Sheet worksheet will be used. It appears first in the document, so the worksheet index is 1. Therefore, the exported JSON will be available at the following URL:

http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json

Understanding the published Google Sheet JSON export

Take a moment to read the JSON export and try to understand its structure. It contains much more data than what you need. The records to be imported can be retrieved using this XPath expression: /feed/entry. You would normally have to assign this value to the item_selector configuration of the Migrate Plus JSON data parser. But, because the value is the same for all Google Sheets, the module takes care of this automatically. You do not have to set that configuration in the source section. As for the data cells, have a look at the following code snippet to see how they appear in the export:

{ "feed": { "entry": [ { "gsx$uniqueid": { "$t": "1" }, "gsx$name": { "$t": "One Uno Un" }, "gsx$photo-file": { "$t": "P01" }, "gsx$bookref": { "$t": "B10" } } ] } }

Tip: Firefox includes a built-in JSON document viewer which helps a lot in understanding the structure of the document. If your browser does not include a similar tool out of the box, look for one in its extensions repository. You can also use an online formatter to pretty print the JSON output.

The following is a list of headers as they appear in the Google Sheet compared to how they appear in the JSON export:

  • unique_id appears like gsx$uniqueid.
  • name appears like gsx$name.
  • photo-file appears like gsx$photo-file.
  • Book Ref appears like gsx$bookref.

So, the header names from the Google Sheet get transformed in the JSON export. They get a prefix of gsx$ and are converted to all lowercase letters, with spaces and most special characters removed. On top of this, the actual cell value, the one you will eventually import, is in a $t property one level under the header name. Now, you could create a list of fields to migrate using XPath expressions as selectors. For example, for the Book Ref header, the selector would be gsx$bookref/$t. But that is not the way to configure the Google Sheets data parser. The module makes some assumptions to make the selectors clearer. The gsx$ prefix and /$t hierarchy are assumed. For the selector, you only need to use the transformed name. In this case: uniqueid, name, photo-file, and bookref.
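As a quick illustration, here is a sketch of the same field defined both ways; the field name and label are only examples:

fields:
  - name: src_book_ref
    label: 'Book paragraph ID'
    # Full selector you would write for a generic JSON parser (hypothetical): gsx$bookref/$t
    # With the google_sheets parser, the gsx$ prefix and /$t suffix are assumed:
    selector: bookref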

Configuring the Migrate Google Sheets source plugin

With the JSON export of the Google Sheet and the list of transformed header names, you can proceed to configure the plugin. It will be very similar to configuring a remote JSON migration. The following code snippet shows source configuration for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: google_sheets
  urls: 'http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json'
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: uniqueid
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo-file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: bookref
  ids:
    src_unique_id:
      type: integer

You use the url plugin, the http fetcher, and the google_sheets parser. The latter is provided by the module. The urls configuration is set to the exported JSON link. The item_selector is not configured because the /feed/entry value is assumed. The fields are configured as in the JSON migration with the caveat of using the transformed header values for the selector. Finally, you need to set the ids key to a combination of fields that uniquely identify each record.

The rest of the migration is almost identical to the JSON example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process, destination, and dependencies section for the Google Sheets migration.

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_microsoft_excel_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_microsoft_excel_source_image
    - udm_libreoffice_calc_source_paragraph
  optional: []

Note that the node migration depends on an image and a paragraph migration. They are already available in the example. One uses a Microsoft Excel file as the source, while the other uses a LibreOffice Calc document. Both of these migrations will be explained in the next article. Refer to this entry to know how to run migrations that depend on others. For example, you can run: drush migrate:import --tag='UD Sheets Source'.

What did you learn in today’s blog post? Have you migrated from Google Sheets before? If so, what challenges have you found? Did you know the procedure to export a sheet in JSON format? Did you know that the Migrate Google Sheets module is an extension of Migrate Plus? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Read more and discuss at agaric.coop.

Categories:

Lullabot: Running and testing Drupal 8 migrations in CircleCI

Wed, 2019/08/21 - 3:54pm

This is the second article in a series on Drupal 8 migrations which started with An Overview for Migrating Drupal Sites to 8. In this article, you will see a sample setup of a Drupal 7 to 8 migration where we provide the front and back-end teams with a daily database that has the latest configuration and content changes, plus a means for the migration team to test migrations.

Categories:

Specbee: How to make Interactive Websites and why you need one?

Wed, 2019/08/21 - 1:56pm

Do you like people who are warm and friendly or cold and hostile? You’ve got it right! I’m comparing Interactive to Non-interactive (static) websites here. In this increasingly digital generation, it isn’t sufficient to place some content on your website and wait for it to work its magic. Providing a web User experience without interactivity is like opening a store filled with inventory without a salesperson to interact with. 
When you create an interactive website, you are forming a connection with your audience. It propels a two-way communication on a medium where you cannot directly interact with a user. Studies have proven that people are more likely to convert on, return to or recommend websites that are interactive. Drupal CMS offers a wide variety of interactive themes and modules that can be easily adapted to your website and further customized.

What is an interactive website?

Put simply, an interactive website is a website that communicates and allows for interaction with users. And by interaction, we don’t just mean allowing users to “click” and “scroll”. Offering users content that is amusing, collaborative and engaging is the essential objective of an interactive website. An interactive website design will not just display attractive content, it will exhibit interactive content: content that will compel users to communicate and deeply engage with the website.

Interactive Website Designs Communicate & Engage with users

Why do you need one?

Today, all businesses in the digital market are racing to expand their audience. Most of them, however, forget that increasing traffic is simply not enough. Retaining and engaging users is what converts. Engaging your users should be your prime motive and for this you will first need an interactive business website. 

  • Drives more engagement. Interactive business websites can make your website less boring, thus garnering more action. 
  • Users will spend more time on a website that interacts with them. This increases your conversion rate, decreases bounce rate and can boost the SEO of your website.
  • Develops a more personalized user experience that can result in happy users. 
  • Engaged users are more likely to maintain a long-term relationship with websites.
  • Interactive website designs can create lasting effects in user’s minds. This improves your brand awareness and reach. 
  • Interactive websites encourage users to recommend your website and link back to it.
  • More conversions mean you have a better chance of making a sale!
How to make interactive websites? 

Creating an interactive website from scratch is easier and more effective as you envision and plan the customer journey from day one. Nevertheless, if you already have a website that you think is static or needs more interactive website features, it is never too late. The first step is to define your business objectives and then identify various touch points from where you can interact with your customers.
If budgets and timelines are constraints you could also look at HTML5 interactive website templates (not recommended if you need customizations).
There are various interactive website features that can increase user engagement, but you should pick the ones that suit your business goals. For example, if you sell financial services, having an interest calculator on your website can prove to be very useful. Nonetheless, the most essential interactive feature that you just cannot ignore is responsiveness. Users will respond to your website on various devices only when it looks and feels presentable.
So what kind of interactive website features or elements can you utilize for your benefit?

  • Social Media Applications

There is no denying that social media marketing can give you visibility like no other marketing program if done right. Provide your users with an option to like and share your content on social media platforms like LinkedIn, Twitter, or Facebook, or just to be able to follow your page. You can also display a live feed from your social media page to keep users updated.

  • Simple Interactive Tools

Offer your users simple interactive tools like quizzes, short games, math tools, tax calculators, etc., connected to your business objectives. Integrating simple software tools that provide your users with instant results has proven to boost user engagement.

  • Interactive Page Elements

You can enhance your page elements by adding something interesting and attractive to it. For example, colourful and dynamic hover-states on links or images, on-scroll or on-click loading/animation, navigation with clicks on image stories, and much more. Add videos or animations to say more about your business in an interactive way.

  • Forms and Feedback

Allowing users to get in touch with you via a contact form is a great way to connect with them. Not only does it let you increase your database of leads, it is a nice way of saying “We care”. Feedback forms let you identify your strengths and weaknesses via the best source – your audience!

  • Chat Widgets

What’s better than a live person chatting with you, answering all your questions about the products or services being offered? That’s probably the highest level of interactivity you can offer on an interactive business website. If live chat sounds like too much commitment, you could also opt for chatbots that can be configured to answer predictable questions.

  • User-generated Content

Letting users add their own content to your website is a great way to improve interactivity. This can be done in the form of comments (in your blog/articles section), inviting them to write guest posts or submit images, or even creating a small discussion forum.

  • Other interactive website Features

You can get creative with the interactive features you want for your audience but here’s a short list of commonly used interactive elements –

  • Google Maps makes your brand more trustworthy and provides a great way to improve interaction, especially when the map is clickable.
  • Newsletters can keep your users coming back to your website for more updates.
  • Voting, and showing users the results of previous polls, helps increase engagement.
  • Search functionality saves users the pain of navigating through your website manually.
  • Ratings can be a quick and interactive method of getting instant feedback that can improve your products/services/work.
  • Slideshows offer a great way to engage users and can make them want to keep going to the next image.

Interactive Website Features and Elements
Drupal for Interactive Websites

When you build your website with Drupal, you will come across multiple options in the form of modules and features that can instantly turn your static website into an interactive one. With Drupal 8, responsiveness comes out of the box, which means that you don’t need any additional modules to make your Drupal website look great irrespective of the device. In addition, there are a variety of modules that encourage interactivity, like the Search API, Contact forms module, Social Media module, Slideshow module, Simplenews (for newsletters) and much more!

The rapid increase in internet speeds is one of the many reasons for a nightmarish drop in the attention spans of consumers. Couple that with highly competitive digital business owners, and your plain-Jane static website could be completely abandoned. To keep your users engaged and engrossed, you will need to create an interactive website. A website that compels users to respond, communicate and hang in there for longer.
Get in touch with expert Drupal developers to help you create interactive websites.


Categories:

Srijan Technologies: My Experience with Progressive Decoupled Blocks

Wed, 2019/08/21 - 8:41am

JS frameworks have changed quite a lot in Drupal, especially with the API-first concept entering the scenario. It is only expected that developers are inclined towards learning more about JS and the related possibilities.

Categories:

Web Wash: Using Code Generators in Drupal 8

Wed, 2019/08/21 - 7:00am

Code generators in Drupal are great as a productivity tool. If you need to create a module, you could easily run a few commands and have a module generated. Then if you need to create a custom block, you could run another command which will generate all the boilerplate code and add the block into a module.

If you want to create a new event subscriber, form, service, etc., there’s always a bit of boilerplate code required to get things going. For example, making sure you extend the right class and inject the correct services. A code generator makes this process quick and easy.

Most of the popular frameworks (Laravel, Symfony and Rails, just to name a few) utilize code generators which create scaffolding code.

In this tutorial, you’ll learn three ways you can generate code in Drupal 8 using Drupal Console, Drush and Module Builder.

Categories:

Agaric Collective: Migrating XML files into Drupal

Wed, 2019/08/21 - 4:18am

Today we will learn how to migrate content from an XML file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. We will also talk about the difference between the two data parsers provided by the module. The example includes node, image, and paragraph migrations. Let’s get started.

Note: Migrate Plus has many more features. For example, it contains source plugins to import from JSON files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing XML files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD XML source migration, whose machine name is ud_migrations_xml_source. It comes with four migrations: udm_xml_source_paragraph, udm_xml_source_image, udm_xml_source_node_local, and udm_xml_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain XML migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from XML. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

<?xml version="1.0" encoding="UTF-8" ?> 1 Michele Metts P01 B10 ... ... B10 The definite guide to Drupal 7 Benjamin Melançon et al. ... ... P01 https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg 240 351 ... ...

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from an XML file

In any migration project, understanding the source is very important. For XML migrations, there are two major considerations. First, where in the XML tree hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. You use an XPath expression to select a set of nodes from the XML document. In this article, the term element is used when referring to an XML document node, to distinguish it from a Drupal node. Second, when you get to the set of elements that you want to import, what child elements are going to be made available to the migration. It is possible that each element contains more data than needed. In XML imports, you have to manually include the child elements that will be required for the migration. The following code snippet shows part of the local XML file relevant to the node migration:

<?xml version="1.0" encoding="UTF-8" ?> 1 Michele Metts P01 B10 ... ...

The set of elements containing node data lies two levels deep in the hierarchy: data at the root, then one level down to udm_people. Each of these elements contains four children:

  • unique_id is the unique identifier for each element returned by the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local XML file for the node migration:

source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin  is set to file and the data_parser_plugin to xml. The urls configuration contains an array of file paths relative to the Drupal root. In the example we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.

Technical note: Migrate Plus provides two data parser plugins for XML files. xml uses XMLReader while simple_xml uses SimpleXML. The parser to use is configured in the data_parser_plugin configuration. Also note that when you use the xml parser, the data_fetcher_plugin setting is ignored. More details below.

The item_selector configuration indicates where in the XML file lies the set of elements to be migrated. Its value is an XPath expression used to traverse the file hierarchy. In this case, the value is /data/udm_people. Verify that your expression is valid and select the elements you intend to import. It is common that it starts with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contained spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the subtree specified by the item_selector configuration. In the example, the fields are direct children of the elements to migrate. Therefore, the XPath expression only includes the element name (e.g., unique_id). If you had nested elements, you could use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_xml_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_xml_source_image
    - udm_xml_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and the dependencies point to two XML migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from an XML file

Let’s consider an example where the elements to migrate have many levels of nesting. The following snippets show part of the local XML file and source plugin configuration for the paragraph migration:

<?xml version="1.0" encoding="UTF-8" ?> B10 The Definitive Guide to Drupal 7 Benjamin Melançon et al. ... ... source: plugin: url # This configuration is ignored by the 'xml' data parser plugin. # It only has effect when using the 'simple_xml' data parser plugin. data_fetcher_plugin: file # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php data_parser_plugin: xml urls: - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml # XPath expression. It is common that it starts with a slash (/). item_selector: /data/udm_book_paragraph fields: - name: src_book_id label: 'Book ID' selector: book_id - name: src_book_title label: 'Title' selector: book_details/title - name: src_book_author label: 'Author' selector: book_details/author ids: src_book_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Particularly, the book_details element has two children: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy as needed to find the value that should be assigned to the field. Each level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to structure the XML file is to have two children per record: paragraph_id would contain the unique identifier for the record, and paragraph_data would contain a child element to specify the paragraph type plus an arbitrary number of extra child elements with the data to be migrated. In the process section, you would iterate over the children to map the paragraph fields.
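A hypothetical source section for such a file could look like the following sketch; the file path, element names, and field names are assumptions for illustration only:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  urls:
    # Hypothetical file holding records for several paragraph types.
    - modules/custom/my_module/sources/my_paragraphs.xml
  item_selector: /data/udm_paragraphs
  fields:
    - name: src_paragraph_id
      label: 'Paragraph ID'
      selector: paragraph_id
    - name: src_paragraph_type
      label: 'Paragraph type'
      selector: paragraph_data/type
  ids:
    src_paragraph_id:
      type: string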

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from an XML file

Let’s consider an example where the elements to migrate have more data than needed. The following snippets show part of the local XML file and source plugin configuration for the image migration:

<udm_photos>
  <photo_id>P01</photo_id>
  <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
  <photo_dimensions>
    <width>240</width>
    <height>351</height>
  </photo_dimensions>
</udm_photos>
...

source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the elements with image data have extra children that are not used in the migration. Particularly, the photo_dimensions element has two children representing the width and height of the image. To ignore this subtree, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/width and photo_dimensions/height, respectively.

XML file location

Important: What is described in this section only applies when you use either (1) the xml data parser or (2) the simple_xml parser with the file data fetcher.

When using the file data fetcher plugin, you have four options to indicate the location of the XML files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/xml_files/example.xml.
  • Use an absolute path pointing to the XML location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/xml_files/example.xml.
  • Use a fully-qualified URL to any built-in wrapper like http, https, ftp, ftps, etc. For example, https://understanddrupal.com/xml-files/example.xml.
  • Use a custom stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:
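As a minimal sketch, Drupal's built-in public:// and private:// stream wrappers could be used in the urls configuration like this; the file locations are hypothetical placeholders:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  urls:
    # Hypothetical locations using Drupal's public and private file system stream wrappers.
    - 'public://migration_sources/example.xml'
    - 'private://migration_sources/example.xml'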

Migrating remote XML files

Important: What is described in this section only applies when you use the http data fetcher plugin.

Migrate Plus provides another data fetcher plugin named http. Under the hood, it uses the Guzzle HTTP Client library. You can use it to fetch files using any protocol supported by curl like http, https, ftp, ftps, sftp, etc. In a future blog post we will explain this data fetcher in more detail. For now, the udm_xml_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin, data_parser_plugin, and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote XML file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  # 'simple_xml' is configured to be able to use the 'http' fetcher.
  data_parser_plugin: simple_xml
  urls:
    - https://sendeyo.com/up/d/478f835718
  item_selector: /data/udm_people
  fields: ...
  ids: ...

And that is how you can use XML files as the source of your migrations. Many more configurations are possible when you use the simple_xml parser with the http fetcher. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

XMLReader vs SimpleXML in Drupal migrations

As noted in the module’s README file, the xml parser plugin uses the XMLReader interface to incrementally parse XML files. The reader acts as a cursor going forward on the document stream and stopping at each node on the way. This should be used for XML sources which are potentially very large. On the other hand, the simple_xml parser plugin uses the SimpleXML interface to fully parse XML files. This should be used for XML sources where you need to be able to use complex XPath expressions for your item selectors, or have to access elements outside of the current item element via XPath.
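In practice, choosing between the two comes down to a single line in the source configuration, as in this sketch:

source:
  plugin: url
  # Honored only when 'simple_xml' is used; ignored by the 'xml' parser.
  data_fetcher_plugin: file
  # 'xml' uses XMLReader: incremental parsing, suitable for very large files.
  data_parser_plugin: xml
  # 'simple_xml' uses SimpleXML: full parsing, supports complex XPath expressions.
  # data_parser_plugin: simple_xml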

What did you learn in today’s blog post? Have you migrated from XML files before? If so, what challenges have you found? Did you know that you can read local and remote files? Did you know that the data_fetcher_plugin configuration is ignored when using the xml data parser? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series is made possible thanks to these generous sponsors. Contact us if your organization would like to support this documentation project, whether it is the migration series or other topics.

Next: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories:

Mediacurrent: Open Waters Podcast Ep. 3: Improving Drupal's Admin UI With Cristina Chumillas

Tue, 2019/08/20 - 9:36pm

Welcome to Mediacurrent’s Open Waters, a podcast about open source solutions. In this episode, we catch up with Cristina Chumillas. Cristina comes from the design world and is passionate about front-end development. She works at Lullabot (though when we recorded this, she worked at Ymbra) and has been involved in the Drupal community for years, contributing with code, design, and organizing events. Her contributions to Drupal Core are mainly focused on front-end, design and UX. Nowadays, she's a co-organizer of the Drupal Admin UI & JS Modernization Initiative and a Drupal core UX maintainer.


Audio Download Link

Project Pick

 Claro

Interview with Cristina Chumillas
  1. Tell us about yourself: What is your role, who do you work for, and where are you from?
  2. You are a busy woman, what events have you recently attended and/or are scheduled to attend in the near future?
  3. Which Drupal core initiatives are you currently contributing to?
  4. How does a better admin theme UI help site owners?  
  5. What are the main goals?
  6. Is this initiative sponsored by anyone? 
  7. Who is the target for the initiative? 
  8. How is the initiative organized? 
  9. What improvements will it bring in a short/mid/long term?
  10. How can people get involved in helping with these initiatives?
Quick-takes
  •  Cristina contributed to the Out Of The Box initiative for a while, together with podcast co-host Mario
  • 3 reasons why Drupal needs a better admin theme UI: Content Productivity, savings, less frustration
  • Main goals: We have 2 separate paths: the super-fancy JS app that will land in an undefined point in the future and Claro as the new realistic & releasable short term work that will introduce improvements on each release.
  • Why focus on admin UI? We’re focusing on the content author's experience because that’s one of the main pain points mentioned in an early survey we did last year.
  • How is the initiative organized? JS, UX&User studies, New design system (UI), Claro (new theme)
  • What improvements will it bring in a short/mid/long term? Short: New theme/UI, Mid: editor role with specific features, autosave, Long: JS app. 


That’s it for today’s show, thanks for joining us!  Looking for more useful tips, technical takeaways, and creative insights? Visit mediacurrent.com/podcast for more episodes and to subscribe to our newsletter.

Categories:

Fuse Interactive: What does Drupal 7 End of Life mean for your business?

Tue, 2019/08/20 - 8:08pm
It's been a great run, but it will soon be time to say goodbye to an old friend. Over the last 8+ years, Drupal 7 has served our clients well. During that time we're thankful to have worked on 100+ Drupal 7 websites for some great organizations, from non-profits to telecoms. While we have been building all our projects on Drupal 8 for the last couple of years, Drupal 7 has continued to be a stable and effective business tool for many of our clients. As announced by Dries Buytaert at Drupal Europe (September 2018), Drupal 7 (and 8) will reach End of Life in November 2021, while Drupal 9 is scheduled to be released in 2020. In this post we hope to answer some of the questions you may have as Drupal 7 or 8 site owners / managers regarding the implications of this End of Life date.
Categories:

Specbee: Drupal Community: It takes a village to build a world-class CMS. See what they have to say.

Tue, 2019/08/20 - 7:49am
Behind great software lies good code. And behind good code lies a group of passionate individuals with a common drive of making a difference.

It is no mystery why Drupal has been the chosen one for over a million diverse organizations all across the globe. Unsurprisingly, the reason behind the success of this open-source software is the devoted Drupal community. A diverse group of individuals who relentlessly work towards making Drupal stronger and more powerful every single day! To them, Drupal isn’t just a web CMS platform - Drupal is a Religion. A religion that unites everyone who believes that giving back is the only way to move forward, where contributing to the Drupal project gives them meaning and purpose.

Recently, I had the privilege of interacting with a few of the most decorated and remarkable members of the Drupal community - who also happen to be Drupal’s top contributors. I asked them about the reason(s) behind their contributions to Drupal and what they do to make a difference. Their responses were incredible, honest and unfeigned.

Adrian Cid Almaguer Senior Drupal Developer. Acquia Certified Grand Master - Drupal 8

I use Drupal every day and my career in recent years has focused on it, so I want to work with something that I feel comfortable with and that meets my needs. If I find errors or something that can be done in a better way in projects I’m using or in Drupal Core, I open an issue in the project queue, and if I have the knowledge and the time, I create a patch for it. This is a way I can say THANKS to the Drupal community.

The strength of Drupal is the community and the contributed modules you can use to create your project. One person can’t create and maintain all the modules you will need, but if several of us take on the task of doing it, everything will be easier. And it is not just code: we need documentation, we need examples, translations and many other things in the community. The only way to do this is if each Drupal user gives at least a small contribution to the community. So, when I contribute to Drupal, I’m helping you to have time to contribute to something that I may need in the future.

I maintain many Drupal modules, so the main contributions are creating, updating and migrating Drupal modules, but I contribute in other areas too. I contribute by translating Drupal to the Spanish language and moderating the user translations, I create patches for some projects I do not maintain, sometimes I review patches in the issue queue, I write and update module documentation, I make some contributions creating tests for Drupal modules, I give support to the community in the Slack channels and on the Drupal Stack Exchange site, and I help new contributors learn how to contribute projects to Drupal in the correct way. And as I’m a former teacher, I participate in regional Drupal events promoting how and why it is important to contribute to Drupal projects and how to do it.

I would love to maintain a Drupal core module, but I don’t know if I will have the time to do it, so for the moment I will continue migrating to Drupal 8, evolving and keeping up to date the modules I maintain.

Alex Moreno Technical Architect at Acquia

Contributing to open source is not just a good and healthy habit for the communities. It is also a healthy habit for your own projects and your self-improvement. Contributing validates your knowledge by opening it up to everyone else, so you can get feedback that helps you improve and also ensures that your project is taking the right direction. For example, when patching other contributed modules with fixes or improvements.

I enjoy writing code. My main contributions have always been in that direction, although more recently I have also been helping with other tasks, like Spanish translations in Drupal 8 Umami.

Baddy Sonja Breidert Co-Founder of 1xINTERNET

One of the reasons why I contribute to Drupal is to make Drupal more known in my area, get more people involved, attract new users, etc. I do my bit in contributing to the Drupal project by organising events like Drupal Europe and Drupal Camps in Germany and Iceland.

It is extremely gratifying to see new people from all over the world join the Drupal community - be it as developers, designers, volunteers, event organisers, testers or for example writing documentation. There are so many different ways to contribute!

And what happens over and over again is that people originally come for a very specific purpose, say a project they want to launch, and then stay in the community just because it is such a friendly, diverse and welcoming place! My work on the board of the Drupal Association confirms the old slogan over and over again: Come for the code, stay for the community!

Daniel Wehner Senior Drupal Engineer at Times Higher Education

Unlike many other projects, the Drupal community tries to create a sustainable environment - from the technical side, but probably, in the long run more importantly, from the community side. Initiatives like Drupal Diversity & Inclusion lay the foundation for a project that won't just go away like many others.

Jacob Rockowitz Drupal developer. Built and maintains the Webform module for Drupal 8

Contributing to open source software provides me with an endless collaborative challenge. My professional livelihood is tied to the success of Drupal which inspires me to give something back to the Drupal community. Contributing to Drupal also provides me with an intellectual and social hobby where I get to interact with new people every day.

Everyone has a personal groove/style for building software. After 20 years of writing software, I have come to accept that I like working towards a single goal/project, which is the Webform module for Drupal 8. At the same time, I have also learned that building open source software is more than just contributing code; it is about supporting and creating a community around the code. Supporting the Drupal community has led me to also write documentation, blog about Drupal, Webform, and sustainability, present at conferences, and address the bigger picture around building and maintaining software.

Joel Pittet Web Coder. Drupal 8 Theme System Co-maintainer

I feel that I should give back to ensure the tools I use keep working. Monetarily or with my time. And with Drupal it’s a bit of both:

I started submitting patches for the Twig initiative for Drupal core, then mentoring and talks at DrupalCons and camps, followed by some contrib patches, then offered to co-maintain some commerce modules, which snowballed into more and more contrib module co-maintaining, mostly for ones I use at work.

I pay for the Drupal Association individual membership to help the teams with all the Drupal.org work and event work they do.

Joachim Noreiko Freelance Drupal developer. Built and Maintains Drupal Code Builder

I guess, I like fixing stuff, I like to code a bit in my spare time, I like to contribute to Drupal, and as a freelancer, it’s good to be visible in the community.

Lately I’ve actually been feeling a bit demotivated. I’ve been contributing to core a bit, but it’s always an uphill struggle getting beyond an initial patch. I maintain a few contrib modules, and my Drupal Code Builder tool as well.

Joris Vercammen (borisson) Drupal developer, Search API + Facets

Being able to pull so many awesome modules for free really makes the work we all do in building good solutions for our customers a lot easier. This system doesn’t work without some of us putting things (code/time/blogposts/…) back into it. The Drupal community has given me a lot of things unrelated to just the software as well (really awesome friends, a better job, the ability to travel all over Europe, etc.). To enable others that come after me to have a similar experience, I think that it is important to give back, as long as it fits in the schedule.

Most of my contributions are in the form of code. I try to do some mentoring, but while that is a lot more effective, it is really hard and I’m not that great at it, yet. I’m mostly interested in the Search API ecosystem because that’s what I got roped into when I started contributing. A lot of my core contributions are for blockers (of blockers of blockers) for things that we need. I try to focus a little bit on the Facets module, since that is what I’m responsible for, but it’s not always easy or the most fun to do. Especially since I’ve still not built a Drupal 8 site with facets on it.

Malabya Open-source evangelist. Drupal Practice Head at Specbee

Community. That’s what motivates me to contribute. The feeling I get when someone uses my code or module or theme is great, which is a good drive to motivate more contributions. Drupal being open-source software, it is where it is because of the contributions of thousands of contributors. So, when we use Drupal, it is our responsibility to contribute back to the software to make it even better for a wider reach.

Apart from contributing modules, themes & distributions, I help organise local meetups in Bangalore and mentor new developers to contribute and begin their contribution journey from the ground level. It gives me immense pleasure when I can introduce someone to the world of Drupal and help them understand the importance of contributions and community. Going forward, I would definitely strive towards introducing Drupal to students, giving them a career choice and bringing in more members to the Drupal community.

Nick Wilde Drupal developer at Taoti Creative

My main motivation has always been improving what I use - my first open-source contribution, before my Drupal days, was a bug-fix for a then-abandoned project that was impairing my modding of TES-III Morrowind ;). I like the challenges and benefits of working in a community. Code reviews, both those I've done and those done on my code, have been incredibly important to my growth as a developer. I have also used contribution as a portfolio/career advancement method, although that is only of tertiary importance to me. Seeing a test go green or getting confirmation that a bug is fixed is incredibly satisfying to me personally. Also, I believe that if you use an open source project, especially professionally, contributing back is the right thing to do.

My level of contribution varies a fair bit depending on how busy I am personally and professionally, but it is mostly contrib module maintenance and patch submissions. Also, in the last year or so, I've been getting into a lot more mentorship roles - both in my new company and within the broader community. I restarted my local Drupal meetup and am doing presentations there regularly.

Rachel Norfolk Community Liaison at Drupal Association

Contribution for me is, at least partly, a selfish act. I have learned so much from some of the best people in the industry, simply by following along and helping where I can. I have also built up an amazing network of people who, because they know I help others, are more prepared to help me when I need it. I contribute both code and in other ways: I’m occasionally in the Drupal core issue queues, I help mentor others and I get involved in community issues.

Renato Goncalves Software Engineer at CI&T's Drupal Competence Office (DCO)

My first motivation to contribute to the Drupal community is helping others that have the same requirements as mine. To be honest, I get very happy when someone uses my community code in their projects. I'm glad to know that I'm helping people. When I'm developing a new feature, I check whether my solution can be useful to other projects, and in that way I create my code in a generic way. Usually, I'm the first to reuse the code several times. I think this is important to make Drupal a powerful and collaborative framework. I liked my first experience using the framework because for each requirement of my project, Drupal has a solution. I think contributing to the community is important for that. More and more new people are going to use the framework, and consequently become new contributors, and in that way, it becomes increasingly powerful and efficient. An example of this is the Drupal Security Team, which works hard to ensure that Drupal is a secure framework. I make contributions at the same time as I deliver projects. Today I write my code in a generic way, that is, the code can be reused at other times. A good example of this model is the Janrain Connect project. This project is official in the community (a contrib project), and my team and I worked hard to use 100% generic code, so we can reuse this code in other cases.

When we need to make some improvement in the code, the first step is to check for a way to make this improvement using a generic solution. Using this approach we can help our project and help the community. In this way, we are contributing to making an organized and agile framework. The goal is that other people don't need to re-write code. It is a way of transforming the framework into a collaborative model.

Thomas Seidl Drupal developer, “The Search API Guy”

My motivation comes from several sources: First off, I just like programming, and while fixing bugs, writing tests or giving support isn’t always fun, a lot of the time working on my modules is. It’s just one of my hobbies in that regard. Then, with my modules running on more than 100,000 sites (based on the report), there’s both a sense of accomplishment and responsibility – I feel proud in providing functionality for so many sites, and while, as a volunteer, I don’t feel directly responsible for them, I still want to help improve them where I can, take away pain points and ensure they keep running. And lastly, having a popular, well-maintained module is also the base of my business as a freelancer: it not only provides marketing for my abilities, but also the very market of users who want customizations. So, maintaining and improving my modules is also, indirectly, important for my income, even though the vast majority of my contributed work is unpaid.

Apart from participating in coding standards discussions, I almost exclusively contribute by maintaining my modules (and, increasingly rarely, adding new ones) – fixing bugs, adding features, answering support requests, etc. I sometimes also provide patches for other modules, but generally only when I’m paid to do so. (“My modules” being Search API and its add-on modules Database Search, Autocomplete, Saved Searches and, for D7 only, Solr, Pages, Location and Multi-Index Searches.)

And Lastly....

It’s not just any brands that have adopted Drupal as their CMS – they are the cream of brands. From NASA to the Emmy Awards. From Harvard University to eBay. From Twitter to New York State. These brands have various reasons to choose Drupal as their Content Management System: Drupal’s adaptability to any business process, advanced UX and UI capabilities for an interactive and personalized experience, load-time optimization functionalities, easy content authoring and management, high security standards, the API-first architecture and so much more!

The major reason why Drupal is accepted and endorsed by more than a million websites today is that Drupal is always ahead of the curve, especially since Drupal adopted a continuous innovation model wherein updated versions are released every six months with seamless upgrade paths. All of this is possible because of the proactive and ever-evolving Drupal community. The goals for their contributions may vary - from optimizing projects for personal/professional success to creating an impact on others or simply gaining more experience. Either way, they are making a difference and taking Drupal to the next level every time they contribute. Thanks to all the contributors who are making Drupal a better place.

I’d like to end with an excerpt from Dries - “It’s really the Drupal community and not so much the software that makes the Drupal project what it is. So fostering the Drupal community is actually more important than just managing the code base.”

Warmly thanking all the mentioned contributors for helping me put this article together.

 

Categories:

Drupal blog: Low-code and no-code tools continue to drive the web forward

Mon, 2019/08/19 - 11:34pm

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Low-code and no-code tools for the web are on a decade-long rise; they enable self-service for marketers, and allow developers to focus on innovation.

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web. Examples include:

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019, "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster (FTP hand-written HTML files, anyone?) has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Categories:

Jacob Rockowitz: Requesting a medical appointment online begins a patient's digital journey

Mon, 2019/08/19 - 6:59pm

Experience

My experience with healthcare, Drupal, and webforms

For the past 20 years, I have worked in healthcare helping Memorial Sloan Kettering Cancer Center (MSKCC) evolve their digital platform and patient experience. About ten years ago, I persuaded MSKCC to switch to Drupal 6, which was followed by a migration to Drupal 8. More recently, I have become the maintainer of the Webform module for Drupal 8. Now, I want to leverage my experience and expertise in healthcare, webforms, and Drupal, to start exploring how we can improve patient and caregiver’s digital experience related to online appointment requests.

It’s important that we understand the problem/challenge of requesting an appointment online, examine how hospitals are currently solving this problem, and then offer some recommendations and ways to improve existing approaches. Instead of writing one very long blog post, I’m going to break up this discussion into a series of three blog posts. This initial post is going to address the patient journey and experience around an appointment request form.

These blog posts are not Drupal-specific, but my goal is to create and share an exemplary "Request an appointment" form template for the Webform module for Drupal 8.

Improving patient and caregiver’s digital experience

Improving the patient and caregiver digital experience is a very broad, massive, and challenging topic. Personally, my goal when working with doctors, researchers, and caregivers is…

Making things "easy" for patients and caregivers in healthcare is easier said...Read More

Categories:

Agaric Collective: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

Mon, 2019/08/19 - 4:45pm

In the previous two blog posts, we learned to migrate data from JSON and XML files. We presented how to configure the migrations to fetch remote files. In today's blog post, we will learn how to add HTTP request headers and authentication to the request. For HTTP authentication, you need to choose among three options: Basic, Digest, and OAuth2. To provide this functionality, the Migrate API leverages the Guzzle HTTP Client library. Usage requirements and limitations will be presented. Let's begin.

Migrate Plus architecture for remote data fetching

The Migrate Plus module provides an extensible architecture for importing remote files. It makes use of different plugin types to fetch files, add HTTP authentication to the request, and parse the response. The following is an overview of the different plugins and how they work together to allow code and configuration reuse.

Source plugin

The url source plugin is at the core of the implementation. Its purpose is to retrieve data from a list of URLs. Ingrained in the system is the goal to separate the file fetching from the file parsing. The url plugin will delegate both tasks to other plugin types provided by Migrate Plus.

Data fetcher plugins

For file fetching, you have two options. The first is file, a general-purpose fetcher for getting files from the local file system or via stream wrappers. This plugin has been explained in detail in the posts about JSON and XML migrations. Because it supports stream wrappers, it is very useful for fetching files from different locations and over different protocols. But it has two major downsides. First, it does not allow setting custom HTTP headers or authentication parameters. Second, this fetcher is completely ignored if used with the xml or soap data parser (see below).

The second fetcher plugin is http. Under the hood, it uses the Guzzle HTTP Client library. This plugin allows you to define a headers configuration. You can set it to a list of HTTP headers to send along with the request. It also allows you to use authentication plugins (see below). The downside is that you cannot use stream wrappers. Only protocols supported by curl can be used: http, https, ftp, ftps, sftp, etc.
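To make the headers option concrete before the full examples below, here is a minimal, hypothetical sketch of just the fetcher portion of a source configuration. The endpoint URL and header values are placeholders for illustration and are not part of the example module:

source:
  plugin: url
  # Use the Guzzle-based fetcher so that headers and authentication can be set.
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    # Hypothetical endpoint, for illustration only.
    - https://example.com/api/data.json
  # HTTP headers sent along with every request made by this migration.
  headers:
    Accept: 'application/json'
    Accept-Language: 'en-US,en;q=0.5'

The complete snippets later in this post show how the item_selector, fields, and ids configurations fit around this.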

Data parsers plugins

Data parsers are responsible for processing the files according to their type: JSON, XML, or SOAP. These plugins let you select a subtree within the file hierarchy that contains the elements to be imported. Each record might contain more data than what you need for the migration. So, you make a second selection to manually indicate which elements will be made available to the migration. Migrate Plus provides four data parsers, but only two use the data fetcher plugins. Here is a summary:

  • json can use any of the data fetchers. Offers an extra configuration option called include_raw_data. When set to true, in addition to all the fields manually defined, a new one is attached to the source with the name raw. This contains a copy of the full object currently being processed.
  • simple_xml can use any data fetcher. It uses the SimpleXML class.
  • xml does not use any of the data fetchers. It uses the XMLReader class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.
  • soap does not use any data fetcher. It uses the SoapClient class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.

The difference between xml and simple_xml was presented in the previous article.

Authentication plugins

These plugins add authentication headers to the request. If the credentials are correct, you can fetch data from protected resources. They work exclusively with the http data fetcher. Therefore, you can use them only with the json and simple_xml data parsers. To do that, you set an authentication configuration whose value can be one of the following:

  • basic for HTTP Basic authentication.
  • digest for HTTP Digest authentication.
  • oauth2 for OAuth2 authentication over HTTP.

Below are examples for JSON and XML imports with HTTP headers and authentication configured. The code snippets do not contain real migrations. You can also find them in the ud_migrations_http_headers_authentication directory of the demo repository https://github.com/dinarcon/ud_migrations.

Important: The examples are shown for reference only. Do not store any sensitive data in plain text or commit it to the repository.

JSON and XML Drupal migrations with HTTP request headers and Basic authentication:

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the basic authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: basic
    username: totally
    password: insecure
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and Digest authentication:

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the digest authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: digest
    username: totally
    password: insecure
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and OAuth2 authentication:

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the oauth2 authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: oauth2
    grant_type: client_credentials
    base_uri: https://understanddrupal.com
    token_url: /oauth2/token
    client_id: some_client_id
    client_secret: totally_insecure_secret
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

What did you learn in today’s blog post? Did you know the configuration names for adding HTTP request headers and authentication to your JSON and XML requests? Did you know that this was limited to the parsers that make use of the http fetcher? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories:

Agiledrop.com Blog: Top 10 Drupal Accessibility Modules

Mon, 2019/08/19 - 12:09pm

In this post, we'll take a look at some of the most useful modules that will help make your Drupal site more accessible to developers, content editors and users alike.

READ MORE
Categories:

Dries Buytaert: Low-code and no-code tools continue to drive the web forward

Mon, 2019/08/19 - 10:35am

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web. Examples include:

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019, "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Categories:

Liip: How to nail your on-page SEO: A step-by-step guide

Mon, 2019/08/19 - 12:00am

On-page SEO is much more than title tags, meta descriptions and valuable content. Here is my actionable guide for digital marketers. I am an SEO Specialist and teamed up with one of my colleagues – a Content Marketing Specialist – for this article. Have fun reading it.

On-page SEO is about creating relevant signals to let search engines know what your page is about, which improves the website’s ranking in search results.

No IT skills are needed to implement on-page recommendations, as most CMSs have an extension for it. For example, if you use WordPress, download the Yoast SEO plugin, or add the Metatag module to Drupal.

On-Page SEO: Hypothetical case study

How to create those relevant signals? Let’s take the example of a florist. StarFlo is located in Lausanne and Zurich, Switzerland. StarFlo has a website in three languages (French, German and English). The flower shop decided to create a specific product page for weddings, in English. A product page is designed to provide information to users about a product and/or a service.

Find relevant keywords with the right search intent

The first step is to define the keywords with the highest potential. The goal is to select words that help increase the ranking of the wedding product page.
Here are some examples of keywords (non-exhaustive list):

  • “wedding flowers lausanne”
  • “wedding flowers zurich”
  • “wedding table decorations”
  • “wedding bouquet”
  • “rose bouquet bridal”
  • “winter wedding flowers”
  • “wedding floral packages”
  • “orchid wedding bouquet”
  • “wedding flowers shop”

We will take the monthly volume of English keywords in Switzerland into consideration, because we are focusing on a flower shop located in Lausanne and Zurich whose product page is in English.

According to the image below, “wedding table decorations” and “wedding bouquet” have a higher volume (column Search) and a low difficulty score (column KD). Therefore, it could probably make sense to use those keywords. However, you need to investigate further.

If you check Google search results for the keyword “wedding table decorations”, you see a lot of images coming from Pinterest. People who are looking for “wedding table decorations” are looking for ideas and inspiration. As a result, “wedding table decorations” might be a great blog post topic. As StarFlo wants to create a product page, we suggest using “wedding flowers shop” as a primary keyword, even if this keyword has a lower volume than “wedding table decorations”. The intent of the people searching “wedding flowers shop” is to buy wedding flowers. The intent of the new product page of StarFlo is to sell wedding flowers. Therefore the goal is to align both the intent of the target public and the intent of the product page with this keyword.
Once you have the keywords, optimize the content of the page

On-page SEO structural elements

Title tags, H1, H2, and images are part of the on-page structural elements that communicate with search engines

Title tag best practices: clear and easy to understand

The title tag is the page title and must contain the keyword in less than 60 characters (600 pixels). Ideally, the title tag is unambiguous and easy to understand. You define the title tag individually for each page.

For example:

Wedding flowers shop in Zurich & Lausanne | StarFlo

You do not need to end your title tag with your brand name. However, it helps to build awareness, even without raising the volume of clicks.

Meta description best practices: a short description with a call to action

The meta description describes the content of a page and appears in the search results. The purpose of the meta description is to help the user choose the right page among the results in Google Search. It must be clear, short and engaging. You have 160 characters at your disposal.

We recommend finishing your meta description with a clear call-to-action. Use a verb to describe what you want your target audience to do.

For example:

StarFlo is a flower shop located in Lausanne & Zurich which designs traditional & modern wedding flower arrangements. See our unique wedding creations.

SEO URL’s best practices

The URL is the address of your website. Its name describes both the content of the page and encompasses the page in the overall site map. The URL should contain the keyword and be short.
The structure of the URL is usually governed by rules in the CMS you are using.
Examples for StarFlo landing page about wedding flowers:
✔︎ https://starflo.ch/wedding-flowers
✘ https://starflo.ch/node/357

Use secondary keywords to reinforce the semantic of your page

StarFlo wants to rank at the top for “wedding flowers shop” and “Lausanne”. You can help this page improve its ranking by also using secondary keywords. Secondary keywords are keywords that relate to your primary keyword.

Ask yourself: what questions are your target audience looking to answer by searching for these keywords? What valuable information can you provide to help them?
Your text content must offer added value for your target audience. To ensure this, create a list of topics. In the case of StarFlo, you can include secondary keywords such as “wedding bouquet” and “wedding table decorations”. It may seem odd that the primary keyword has a lower volume than the secondary keywords, but it makes sense in this context because these secondary keywords reinforce the semantics of the page.

In the “wedding bouquet” section, you can give some examples of “Bridesmaid bouquets”, “Bridal bouquets” and “Maid of Honor bouquets”, as well as other services or products related to the proposed bouquets.

SEO H1 & H2 tags best practices: structure the text with several titles

A structured text with titles and subtitles is easier to read. Furthermore, titles support your organic ranking, as they are considered strong signals by search engines. Start by defining your H1 and H2 titles. Use only one H1. Your titles should be clear and descriptive. Avoid generic or thematic titles.

Here is an example:

  • H1: StarFlo, wedding flower shop specialized in nuptial floral design in Lausanne, Zurich & the surrounding area
  • H2: Outstanding wedding table decorations created by our wedding flower specialist in Lausanne & Zurich
  • H2: Wedding bouquet for the bride in Lausanne & Zurich
  • H2: Best seasonal flowers for your wedding
On-page content best practices: Write a text longer than 300 words

Keep in mind these three key points when you write your text:

  • Anything under 300 words is considered thin content.
  • Make sure that your primary keyword is part of the first 100 words in your text.
  • Structure your text with titles and subtitles to help your readers. Moreover, as said above, H1 & H2 are strong signals.
Images & videos best practices: Define file names, alt-texts and captions

Search engines don’t scan the content of a video or an image (yet). Search engines scan the content of file names, alt-texts and captions only.
Define a meaningful alt-text for each image and video, and include your keyword in the file name. Google can then grasp what the image shows. Remember that you want the website to load fast, so you may compress images.

SEO Internal linking best practices: create a thematic universe within your website using internal links

When writing your text, try to create links to other pages on your website. You can add links in the text or in teasers to draw attention to more (or related) topics.

From a content point of view, when you link pages of your own website, you add value to your target audience as their attention is drawn to other pages of interest. Furthermore, the audience may stay longer on your website. Moreover, creating links gives the search engine a better understanding of the website and creates a thematic universe. Topics within such a universe will be preferred by search engines. Thematic universes help Google determine the importance of a page.

From an SEO point of view, internal linking is very important because it implies a transfer of authority between pages. A website with high domain authority will appear higher in the search engine results. Usually, homepages have the highest authority. In the case of StarFlo, you could add a hyperlink that connects the homepage to the wedding page. We also recommend adding hyperlinks between pages. For instance, you are writing about winter wedding flowers on your wedding page, and you have a dedicated page about seasonal bouquets. You could add a hyperlink from the wedding page to the seasonal flower page.

The result: the homepage will transfer authority to the wedding page, and the wedding page to the seasonal flower page. For each transfer of authority, there will be a slight damping factor. This means that if a page has an authority of 10 when it links to another page, the authority transferred will be, for example, 8.5.

Outbound links Best practices: add relevant content

Link your content to external sources when it makes sense. For example, StarFlo provided the floral decorations for a wedding in the Lausanne Cathedral. You can add a link to the website of Lausanne’s Cathedral while mentioning it.

Bonus: write SEO-optimized blog posts with strong keywords

After publishing your product page, create more entry points to your website. For example, you can write blog posts about your main subject using powerful keywords.

Answer the needs of your readers

When we did the keyword research for StarFlo, we identified a list of topics connected to the main topic. As a reminder, when we were looking at wedding flowers, we discovered that people were very interested in wedding table decorations. We also noticed that people looked for different kinds of bouquets (types of flowers, etc.). You could, for instance, create a page about winter wedding flowers and use these related keywords on it. This strategy helps to define blog post topics.

On the winter wedding flowers page, you could describe the local flowers available in the winter months, the flowers that go best together, etc.

In this case, each of your pages should focus on a different keyword. If two pages are optimized for the same keyword, they compete with each other.

Prioritize your writing according to your business

Once you have a list of topics, it’s good practice not to start writing all at once. We recommend creating an editorial plan. Be honest with yourself: how many hours per week can you dedicate to writing? How long do you need to write a 500-word article? How long do you need to find or create suitable images?

Start with the strongest keywords and the topic with the highest priority for your business.

Here is an example of prioritization:

  • “Wedding table decoration”
  • “Wedding bouquet”
  • “Winter wedding flowers”
  • “Winter wedding floral packages”

If you start writing in September and the branding guidelines of your shop include ‘local’, ‘sustainable’ and ‘proximity’, you will, therefore, write about “Winter wedding flowers” first.

You decide to focus on:

  • “Winter wedding flowers”
  • “Winter wedding floral packages”

As a wrap-up, we prepared the checklist below for you.

Checklist
  • Main keyword is defined
  • Topic brings value to the target public
  • Meta Description and Title Page are written and contain the keyword
  • URL contains the keyword
  • H1 contains the keyword, at the beginning, if possible
  • Text contains a keyword density of 3%
  • Introduction and last paragraph have a particularly high keyword density
  • File names of photos and videos contain the keyword
  • Alt-Text of photos and videos contain the keyword
  • Photo captions contain the keyword
  • Page contains links to other pages on the site
  • Page contains links to valuable external resources
What’s next

On-page SEO is an important part of SEO. However, it’s not the only aspect. Technical SEO also has a tremendous impact. We are working on a hands-on blog post about technical SEO. Reach out to us if you wish to be notified when our guide is ready! Moreover, don’t miss our next SEO/content meet-up taking place on the 26th of September. We are going to explain how to perform keyword research. Contact our content expert if you want to be part of the meet-up.

If you want to have a personalized workshop about on-page SEO or just want to increase your ranking on Google, contact our SEO team for English, German and French.

Categories:

Agaric Collective: Migrating JSON files into Drupal

Sun, 2019/08/18 - 3:34pm

Today we will learn how to migrate content from a JSON file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. The example includes node, images, and paragraphs migrations. Let’s get started.

Note: Migrate Plus has many more features. For example, it contains source plugins to import from XML files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing JSON files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD JSON source migration whose machine name is ud_migrations_json_source. It comes with four migrations: udm_json_source_paragraph, udm_json_source_image, udm_json_source_node_local, and udm_json_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain JSON migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from JSON. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

{ "data": { "udm_people": [ { "unique_id": 1, "name": "Michele Metts", "photo_file": "P01", "book_ref": "B10" }, {...}, {...} ], "udm_book_paragraph": [ { "book_id": "B10", "book_details": { "title": "The definite guide to Drupal 7", "author": "Benjamin Melançon et al." } }, {...}, {...} ], "udm_photos": [ { "photo_id": "P01", "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg", "photo_dimensions": [240, 351] }, {...}, {...} ] } }

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from a JSON file

In any migration project, understanding the source is very important. For JSON migrations, there are two major considerations. First, where in the file hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. Second, when you get to the array of records that you want to import, what fields are going to be made available to the migration. It is possible that each record contains more data than needed. For improved performance, it is recommended to manually include only the fields that will be required for the migration. The following code snippet shows part of the local JSON file relevant to the node migration:

{ "data": { "udm_people": [ { "unique_id": 1, "name": "Michele Metts", "photo_file": "P01", "book_ref": "B10" }, {...}, {...} ] } }

The array of records containing node data lies two levels deep in the hierarchy, starting with data at the root and then descending one level to udm_people. Each element of this array is an object with four properties:

  • unique_id is the unique identifier for each record within the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin  is set to file and the data_parser_plugin to json. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.

The item_selector configuration indicates where in the JSON file lies the array of records to be migrated. Its value is an XPath-like string used to traverse the file hierarchy. In this case, the value is data/udm_people. Note that you separate each level in the hierarchy with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contains spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the location specified by the item_selector configuration. In the example, the fields are direct children of the records to migrate. Therefore, only the property name is specified (e.g., unique_id). If you had nested objects or arrays, you would use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_json_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_json_source_image
    - udm_json_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and the dependencies point to two JSON migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from a JSON file

Let’s consider an example where the records to migrate have many levels of nesting. The following snippets show part of the local JSON file and source plugin configuration for the paragraph migration:

{ "data": { "udm_book_paragraph": [ { "book_id": "B10", "book_details": { "title": "The definite guide to Drupal 7", "author": "Benjamin Melançon et al." } }, {...}, {...} ] } source: plugin: url data_fetcher_plugin: file data_parser_plugin: json urls: - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json item_selector: data/udm_book_paragraph fields: - name: src_book_id label: 'Book ID' selector: book_id - name: src_book_title label: 'Title' selector: book_details/title - name: src_book_author label: 'Author' selector: book_details/author ids: src_book_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Notice that book_details is an object with two properties: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy as needed to find the value that should be assigned to the field. Every level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to configure the JSON file is to have two properties. paragraph_id would contain the unique identifier for the record. paragraph_data would be an object with a property to set the paragraph type. This would also have an arbitrary number of extra properties with the data to be migrated. In the process section, you would iterate over the records to map the paragraph fields.
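As a rough illustration of that approach, here is a hedged sketch of what the source section could look like. The paragraph_id and paragraph_data property names come from the description above, while the module path, selectors, and field names are hypothetical and not part of the example repository:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    # Hypothetical file holding records for more than one paragraph type.
    - modules/custom/my_module/sources/udm_multiple_paragraphs.json
  item_selector: data/udm_paragraphs
  fields:
    - name: src_paragraph_id
      label: 'Paragraph ID'
      selector: paragraph_id
    - name: src_paragraph_type
      label: 'Paragraph type'
      selector: paragraph_data/type
  ids:
    src_paragraph_id:
      type: string

In the process section, src_paragraph_type could then be used to set the paragraph bundle, and each type’s fields would be mapped from the corresponding paragraph_data properties.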

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from a JSON file

Let’s consider an example where the records to migrate have more data than needed. The following snippets show part of the local JSON file and source plugin configuration for the image migration:

{ "data": { "udm_photos": [ { "photo_id": "P01", "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg", "photo_dimensions": [240, 351] }, {...}, {...} ] } } source: plugin: url data_fetcher_plugin: file data_parser_plugin: json urls: - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json item_selector: data/udm_photos fields: - name: src_photo_id label: 'Photo ID' selector: photo_id - name: src_photo_url label: 'Photo URL' selector: photo_url ids: src_photo_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the records with image data have extra properties that are not used in the migration. In particular, the photo_dimensions property contains an array with two values representing the width and height of the image, respectively. To ignore this property, you simply omit it from the fields configuration. If you did want to use it, the selectors would be photo_dimensions/0 for the width and photo_dimensions/1 for the height. Note that you use a zero-based numerical index to get the values out of arrays. As with objects, a slash (/) is used to separate each level in the hierarchy, and you can go as deep as necessary.
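For example, a couple of extra field definitions like the following would capture both values. The names are hypothetical; the demo migration does not define them:

fields:
  - name: src_photo_width
    label: 'Photo width'
    # Zero-based index: first element of the photo_dimensions array.
    selector: photo_dimensions/0
  - name: src_photo_height
    label: 'Photo height'
    # Second element of the photo_dimensions array.
    selector: photo_dimensions/1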

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

JSON file location

When using the file data fetcher plugin, you have three options to indicate the location to the JSON files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/json_files/example.json.
  • Use an absolute path pointing to the JSON location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/json_files/example.json.
  • Use a stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://json_files/example.json. A source sketch using this wrapper follows the list.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/json_files/example.json.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/json-files/example.json.
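As a minimal sketch of the public file system option, assuming the example file had been copied into a json_files directory inside the public files folder, the source section could start like this:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    # Hypothetical location; adjust to wherever you place the file.
    - public://json_files/example.json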
Migrating remote JSON files

Migrate Plus provides another data fetcher plugin named http. You can use it to fetch files using the http and https protocols. Under the hood, it uses the Guzzle HTTP Client library. In a future blog post we will explain this data fetcher in more detail. For now, the udm_json_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.myjson.com/bins/110rcr
  item_selector: data/udm_people
  fields: ...
  ids: ...

And that is how you can use JSON files as the source of your migrations. Many more configurations are possible. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.
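As a quick preview, the http data fetcher accepts a headers configuration key. The following sketch uses placeholder values rather than a working endpoint:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    # Placeholder URL; substitute the protected resource you need to read.
    - https://example.com/protected/example.json
  headers:
    Accept: 'application/json'
    # Placeholder token for illustration only.
    Authorization: 'Bearer replace-with-your-token'
  item_selector: data/udm_people
  fields: ...
  ids: ...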

What did you learn in today’s blog post? Have you migrated from JSON files before? If so, what challenges have you found? Did you know that you can read local and remote files? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: