clemens-tolboom commented on pull request hechoendrupal/drupal-console-core#364

On github - Wed, 2019/08/21 - 8:13am

Ai ... I forgot to mention installing latest version of drupal/console (I did a git pull)

clemens-tolboom commented on pull request hechoendrupal/drupal-console-core#364

On github - Wed, 2019/08/21 - 8:11am

Is there something we could do to fix the issue in composer/installer of drupal-composer/drupal-project? @enzolutions I've tested this commit. Works…

Web Wash: Using Code Generators in Drupal 8

Planet Drupal - Wed, 2019/08/21 - 7:00am

Code generators in Drupal are great as a productivity tool. If you need to create a module, you could easily run a few commands and have a module generated. Then if you need to create a custom block, you could run another command which will generate all the boilerplate code and add the block into a module.

If you want to create a new event subscriber, form, service, etc., there’s always a bit of boilerplate code required to get things going. For example, making sure you extend the right class and inject the correct services. A code generator makes this process quick and easy.

Most of the popular frameworks (Laravel, Symfony and Rails, just to name a few) utilize code generators which create scaffolding code.

In this tutorial, you’ll learn three ways you can generate code in Drupal 8 using Drupal Console, Drush and Module Builder.
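To give a flavor of the workflow, here is a sketch of the kinds of commands these tools provide. Exact generator names and options vary by tool and version (older Drush releases, for example, used longer generator names), so treat these as illustrative rather than authoritative:

  # Drupal Console (if installed): generate a module skeleton, then a block plugin.
  drupal generate:module
  drupal generate:plugin:block

  # Drush 9+ ships code generators too; run `drush generate` with no
  # arguments to list the generators available in your version.
  drush generate module

Each command walks you through an interactive wizard that asks for the machine name, class names, and other details before writing the boilerplate files.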

Agaric Collective: Migrating XML files into Drupal

Planet Drupal - Wed, 2019/08/21 - 4:18am

Today we will learn how to migrate content from an XML file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. We will also talk about the difference between the two data parsers provided by the module. The example includes node, image, and paragraph migrations. Let’s get started.

Note: Migrate Plus has many more features. For example, it contains source plugins to import from JSON files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing XML files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD XML source migration, whose machine name is ud_migrations_xml_source. It comes with four migrations: udm_xml_source_paragraph, udm_xml_source_image, udm_xml_source_node_local, and udm_xml_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain XML migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from XML. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

<?xml version="1.0" encoding="UTF-8" ?> 1 Michele Metts P01 B10 ... ... B10 The definite guide to Drupal 7 Benjamin Melançon et al. ... ... P01 https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg 240 351 ... ...

Note: You can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although that is possible, this example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from an XML file

In any migration project, understanding the source is very important. For XML migrations, there are two major considerations. First, where in the XML tree hierarchy lies the data that you want to import: it can be at the root of the file or several levels deep in the hierarchy. You use an XPath expression to select a set of nodes from the XML document. In this article, we use the term element when referring to an XML document node, to distinguish it from a Drupal node. Second, when you get to the set of elements that you want to import, which child elements are going to be made available to the migration. It is possible that each element contains more data than needed. In XML imports, you have to manually include the child elements that will be required for the migration. The following code snippet shows part of the local XML file relevant to the node migration:

<?xml version="1.0" encoding="UTF-8" ?> 1 Michele Metts P01 B10 ... ...

The set of elements containing node data lies two levels deep in the hierarchy: data at the root, then one level down to udm_people. Each udm_people element contains four children:

  • unique_id is the unique identifier for each element returned by the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local XML file for the node migration:

source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin is set to file and the data_parser_plugin to xml. The urls configuration contains an array of file paths relative to the Drupal root. In the example we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure: the settings that follow will apply equally to all the files listed in urls.
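For instance, a configuration reading two files with the same structure might look like this; the paths here are hypothetical, not part of the example module:

  urls:
    - modules/custom/my_module/sources/people_one.xml
    - modules/custom/my_module/sources/people_two.xml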

Technical note: Migrate Plus provides two data parser plugins for XML files. xml uses XMLReader while simple_xml uses SimpleXML. The parser to use is configured in the data_parser_plugin configuration. Also note that when you use the xml parser, the data_fetcher_plugin setting is ignored. More details below.

The item_selector configuration indicates where in the XML file the set of elements to be migrated lies. Its value is an XPath expression used to traverse the file hierarchy; in this case, /data/udm_people. It is common for the expression to start with a slash (/). Verify that your expression is valid and that it selects the elements you intend to import.

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contains spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the subtree specified by the item_selector configuration. In the example, the fields are direct children of the elements to migrate. Therefore, the XPath expression only includes the element name (e.g., unique_id). If you had nested elements, you could use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_xml_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_xml_source_image
    - udm_xml_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and the migration dependencies point to two XML migrations that come with this example: one for migrating images and the other for migrating paragraphs.

Migrating paragraphs from an XML file

Let’s consider an example where the elements to migrate have many levels of nesting. The following snippets show part of the local XML file and source plugin configuration for the paragraph migration:

<?xml version="1.0" encoding="UTF-8" ?> B10 The Definitive Guide to Drupal 7 Benjamin Melançon et al. ... ... source: plugin: url # This configuration is ignored by the 'xml' data parser plugin. # It only has effect when using the 'simple_xml' data parser plugin. data_fetcher_plugin: file # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php data_parser_plugin: xml urls: - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml # XPath expression. It is common that it starts with a slash (/). item_selector: /data/udm_book_paragraph fields: - name: src_book_id label: 'Book ID' selector: book_id - name: src_book_title label: 'Title' selector: book_details/title - name: src_book_author label: 'Author' selector: book_details/author ids: src_book_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. In particular, the book_details element has two children: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy as needed to find the value that should be assigned to the field. Each level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to configure the XML file is to give each record two children, as sketched below. paragraph_id would contain the unique identifier for the record. paragraph_data would contain a child element to specify the paragraph type. It would also have an arbitrary number of extra child elements with the data to be migrated. In the process section, you would iterate over the children to map the paragraph fields.
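As a rough sketch of that idea (the element names inside paragraph_data are hypothetical, not part of the example module), the XML records could look like this:

  <udm_paragraphs>
    <paragraph_id>P001</paragraph_id>
    <paragraph_data>
      <paragraph_type>book</paragraph_type>
      <title>Some title</title>
      <author>Some author</author>
    </paragraph_data>
  </udm_paragraphs>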

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from an XML file

Let’s consider an example where the elements to migrate have more data than needed. The following snippets show part of the local XML file and source plugin configuration for the image migration:

<data>
  <udm_photos>
    <photo_id>P01</photo_id>
    <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
    <photo_dimensions>
      <width>240</width>
      <height>351</height>
    </photo_dimensions>
  </udm_photos>
  ...
</data>

source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the elements with image data have extra children that are not used in the migration. Particularly, the photo_dimensions element has two children representing the width and height of the image. To ignore this subtree, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/width and photo_dimensions/height, respectively.
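For reference, if you did want the dimensions, a sketch of the extra field definitions could look like this; the src_photo_width and src_photo_height names are arbitrary, as explained earlier:

  fields:
    - name: src_photo_width
      label: 'Photo width'
      selector: photo_dimensions/width
    - name: src_photo_height
      label: 'Photo height'
      selector: photo_dimensions/height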

XML file location

Important: What is described in this section only applies when you use either (1) the xml data parser or (2) the simple_xml parser with the file data fetcher.

When using the file data fetcher plugin, you have the following options to indicate the location of the XML files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/xml_files/example.xml.
  • Use an absolute path pointing to the XML location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/xml_files/example.xml.
  • Use a fully-qualified URL to any built-in wrapper like http, https, ftp, ftps, etc. For example, https://understanddrupal.com/xml-files/example.xml.
  • Use a custom stream wrapper.

Being able to use stream wrappers gives you many more options. For instance, you can read files from Drupal's public (public://), private (private://), and temporary (temporary://) file systems, or from any location exposed by a contributed or custom stream wrapper.
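As a minimal sketch, with a hypothetical path, pointing the urls configuration at Drupal's public file system would look like this:

  urls:
    - public://xml_files/example.xml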

Migrating remote XML files

Important: What is described in this section only applies when you use the http data fetcher plugin.

Migrate Plus provides another data fetcher plugin named http. Under the hood, it uses the Guzzle HTTP Client library. You can use it to fetch files using any protocol supported by curl like http, https, ftp, ftps, sftp, etc. In a future blog post we will explain this data fetcher in more detail. For now, the udm_xml_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin, data_parser_plugin, and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote XML file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  # 'simple_xml' is configured to be able to use the 'http' fetcher.
  data_parser_plugin: simple_xml
  urls:
    - https://sendeyo.com/up/d/478f835718
  item_selector: /data/udm_people
  fields: ...
  ids: ...

And that is how you can use XML files as the source of your migrations. Many more configurations are possible when you use the simple_xml parser with the http fetcher. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

XMLReader vs SimpleXML in Drupal migrations

As noted in the module’s README file, the xml parser plugin uses the XMLReader interface to incrementally parse XML files. The reader acts as a cursor going forward on the document stream and stopping at each node on the way. This should be used for XML sources which are potentially very large. On the other hand, the simple_xml parser plugin uses the SimpleXML interface to fully parse XML files. This should be used for XML sources where you need to be able to use complex XPath expressions for your item selectors, or have to access elements outside of the current item element via XPath.

What did you learn in today’s blog post? Have you migrated from XML files before? If so, what challenges have you found? Did you know that you can read local and remote files? Did you know that the data_fetcher_plugin configuration is ignored when using the xml data parser? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Mediacurrent: Open Waters Podcast Ep. 3: Improving Drupal's Admin UI With Cristina Chumillas

Planet Drupal - Tue, 2019/08/20 - 9:36pm

Welcome to Mediacurrent’s Open Waters, a podcast about open source solutions. In this episode, we catch up with Cristina Chumillas. Cristina comes from the design world and is passionate about front-end development. She works at Lullabot (though when we recorded this, she worked at Ymbra) and has been involved in the Drupal community for years, contributing with code, design, and organizing events. Her contributions to Drupal Core are mainly focused on front-end, design and UX. Nowadays, she's a co-organizer of the Drupal Admin UI & JS Modernization Initiative and a Drupal core UX maintainer.


Audio Download Link

Project Pick

 Claro

Interview with Cristina Chumillas
  1. Tell us about yourself: What is your role, who do you work for, and where are you from?
  2. You are a busy woman, what events have you recently attended and/or are scheduled to attend in the near future?
  3. Which Drupal core initiatives are you currently contributing to?
  4. How does a better admin theme UI help site owners?  
  5. What are the main goals?
  6. Is this initiative sponsored by anyone? 
  7. Who is the target for the initiative? 
  8. How is the initiative organized? 
  9. What improvements will it bring in a short/mid/long term?
  10. How can people get involved in helping with these initiatives?
Quick-takes
  •  Cristina contributed to the Out Of The Box initiative for a while, together with podcast co-host Mario
  • 3 reasons why Drupal needs a better admin theme UI: content productivity, savings, and less frustration
  • Main goals: We have 2 separate paths: the super-fancy JS app that will land at an undefined point in the future, and Claro as the new, realistic and releasable short-term work that will introduce improvements on each release.
  • Why focus on admin UI? We’re focusing on the content author's experience because that’s one of the main pain points mentioned in an early survey we did last year.
  • How is the initiative organized? JS, UX & user studies, new design system (UI), Claro (new theme)
  • What improvements will it bring in the short/mid/long term? Short: new theme/UI; Mid: editor role with specific features, autosave; Long: JS app.


That’s it for today’s show, thanks for joining us!  Looking for more useful tips, technical takeaways, and creative insights? Visit mediacurrent.com/podcast for more episodes and to subscribe to our newsletter.

Fuse Interactive: What does Drupal 7 End of Life mean for your business?

Planet Drupal - Tue, 2019/08/20 - 8:08pm
It's been a great run, but it will soon be time to say goodbye to an old friend. Over the last 8+ years, Drupal 7 has served our clients well. During that time we're thankful to have worked on 100+ Drupal 7 websites for some great organizations, from non-profits to telecoms. While we have been building all our projects on Drupal 8 for the last couple of years, Drupal 7 has continued to be a stable and effective business tool for many of our clients. As announced by Dries Buytaert at Drupal Europe (September 2018), Drupal 7 (and 8) will reach End of Life in November 2021, while Drupal 9 is scheduled to be released in 2020. In this post we hope to answer some of the questions you may have as Drupal 7 or 8 site owners / managers regarding the implications of this End of Life date.

Specbee: Drupal Community: It takes a village to build a world-class CMS. See what they have to say.

Planet Drupal - Tue, 2019/08/20 - 7:49am
Behind great software lies good code. And behind good code lies a group of passionate individuals with a common drive of making a difference.

It is no mystery why Drupal has been the chosen one for over a million diverse organizations all across the globe. Unsurprisingly, the reason behind the success of this open-source software is the devoted Drupal community. A diverse group of individuals who relentlessly work towards making Drupal stronger and more powerful every single day! To them, Drupal isn’t just a web CMS platform - Drupal is a religion. A religion that unites everyone who believes that giving back is the only way to move forward, and where contributing to the Drupal project gives them meaning and purpose.

Recently, I had the privilege of interacting with a few of the most decorated and remarkable members of the Drupal community - who also happen to be Drupal’s top contributors. I asked them about the reason(s) behind their contributing to Drupal and what they do to make a difference. Their responses were incredible, honest and unfeigned.

Adrian Cid Almaguer Senior Drupal Developer. Acquia Certified Grand Master - Drupal 8

I use Drupal every day and my career in the last years has been focused on it, so I want to work with something that I feel comfortable with and that meets my needs. If I find errors or something that can be done in a better way in projects I'm using or in the Drupal Core, I open an issue in the project queue, and if I have the knowledge and the time, I create a patch for it. This is a way I can say THANKS to the Drupal community.

The strength of Drupal is the community and the contributed modules you can use to create your project. One person can’t create and maintain all the modules you will need, but if several of us take on the task of doing it, everything becomes easier. And it is not just code: we need documentation, examples, translations and many other things in the community. The only way to get all this is if each Drupal user gives at least a small contribution back. So, when I contribute to Drupal, I’m helping you to have time to contribute to something that I may need in the future.

I maintain many Drupal modules, so the main contributions are creating, updating and migrating Drupal modules, but I contribute in other areas too. I translate Drupal to the Spanish language and moderate the user translations, I create patches for some projects I do not maintain, sometimes I review patches in the issue queue, I write and update module documentation, I create tests for Drupal modules, I give support to the community in the Slack channels and on the Drupal Stack Exchange site, and I help new contributors learn how to contribute projects to Drupal in the correct way. And as I’m a former teacher, I participate in regional Drupal events promoting why it is important to contribute to Drupal projects and how to do it.

I would love to maintain a Drupal core module, but I don’t know if I will have the time to do it, so for the moment I will continue migrating to Drupal 8, evolving the modules I maintain and keeping them up to date.

Alex Moreno Technical Architect at Acquia

Contributing to open source is not just a good and healthy habit for the communities. It is also a healthy habit for your own projects and your self-improvement. Contributing validates your knowledge opening your knowledge to everyone else. So you can get feedback that helps yourself to improve, and also ensures that your project is taking the right direction. For example when patching other contributed modules with fixes or improvements.

I enjoy writing code. My main contributions have been always on that direction. Although more recently I have been also helping on other tasks, like Spanish translations in Drupal 8 Umami.

Baddy Sonja Breidert Co-Founder of 1xINTERNET

One of the reasons why I contribute to Drupal is to make Drupal more known in my area, get more people involved, attract new users, etc. I do my bit in contributing to the Drupal project by organising events like Drupal Europe and Drupal Camps in Germany and Iceland.

It is extremely gratifying to see new people from all over the world join the Drupal community - be it as developers, designers, volunteers, event organisers, testers or for example writing documentation. There are so many different ways to contribute!

And what happens over and over again is that people originally come for a very specific purpose, say a project they want to launch, and then stay in the community just because it is such a friendly, diverse and welcoming place! My work in the board of the Drupal Association confirms the old slogan over and over again: Come for the code, stay for the community!

Daniel Wehner Senior Drupal Engineer at Times Higher Education

Unlike many other projects, the Drupal community tries to create a sustainable environment, both from the technical side and, probably more important in the long run, from the community side. Initiatives like Drupal Diversity & Inclusion lay the foundation for a project which won't just go away like many others.

Jacob Rockowitz Drupal developer. Built and maintains the Webform module for Drupal 8

Contributing to open source software provides me with an endless collaborative challenge. My professional livelihood is tied to the success of Drupal which inspires me to give something back to the Drupal community. Contributing to Drupal also provides me with an intellectual and social hobby where I get to interact with new people every day.

Everyone has a personal groove/style for building software. After 20 years of writing software, I have come to accept that I like working towards a single goal/project, which is the Webform module for Drupal 8. At the same time, I have also learned that building open source software is more than just contributing code; it is about supporting and creating a community around the code. Supporting the Drupal community has led me to also write documentation, blog about Drupal, Webform, and sustainability, present at conferences, and address the bigger picture around building and maintaining software.

Joel Pittet Web Coder. Drupal 8 Theme System Co-maintainer

I feel that I should give back to ensure the tools I use keep working. Monetarily or with my time. And with Drupal it’s a bit of both:

I started submitting patches for the Twig initiative for Drupal core, then mentoring and talks at DrupalCons and camps, followed by some contrib patches, then offered to co-maintain some commerce modules, which snowballed into more and more contrib module co-maintaining, mostly for ones I use at work.

I pay the Drupal Association individual membership to help the teams with all the Drupal.org work and event work they do.

Joachim Noreiko Freelance Drupal developer. Built and Maintains Drupal Code Builder

I guess, I like fixing stuff, I like to code a bit in my spare time, I like to contribute to Drupal, and as a freelancer, it’s good to be visible in the community.

Lately I’ve actually been feeling a bit demotivated. I’ve been contributing to core a bit, but it’s always an uphill struggle getting beyond an initial patch. I maintain a few contrib modules, and my Drupal Code Builder tool as well.

Joris Vercammen (borisson) Drupal developer, Search API + Facets

Being able to pull so many awesome modules for free really makes the work we all do in building good solutions for our customers a lot easier. This system doesn’t work without some of us putting things (code/time/blogposts/…) back into it. The Drupal community has given me a lot of things unrelated to just the software as well (really awesome friends, a better job, the ability to travel all over Europe, etc.). To enable others that come after me to have a similar experience, I think that it is important to give back, as long as it fits in the schedule.

Most of my contributions are under the form of code. I try to do some mentoring but while that is a lot more effective, it is really hard and I’m not that great at it, yet. I’m mostly interested in the Search API ecosystem because that’s what I got roped in to when I started contributing. A lot of my core contributions are for blockers (of blockers of blockers) for things that we need. I try to focus a little bit on the Facets module, since that is what I’m responsible for, but it’s not always easy or the most fun to do. Especially since I’ve still not built a Drupal 8 site with facets on it.

Malabya Open-source evangelist. Drupal Practice Head at Specbee

Community. That’s what motivates me to contribute. The feeling I get when someone uses your code or module or theme is great, which is a good drive to motivate more contributions. Drupal being open-source software, it is where it is because of the contributions by thousands of contributors. So, when we use Drupal it is our responsibility to contribute back to the software to make it even better for a wider reach.

Apart from contributing modules, theme & distributions I help in organising local meetups in Bangalore and mentoring new developers to contribute and begin their contribution journey from the root level. This gives me immense pleasure when I can help someone to introduce to the world of Drupal and make them understand about the importance of contributions and community. Going forward, I would definitely strive towards introducing Drupal to students giving them a career choice and bring in more members to the Drupal community.

Nick Wilde Drupal developer at Taoti Creative

My main motivation has always been improving what I use - first OS contribution before my Drupal days was a bug-fix for an abandoned at the time project that was impairing my Modding of TES-III Morrowind ;). I like the challenges and benefits of working in a community. Code reviews both that I've done and those done on my code have been incredibly important to my growth as a developer. I also have used it as a portfolio/career advancement method, although that is important it is only of tertiary importance to me. Seeing a test go green or a getting confirmation that a bug is fixed is incredibly satisfying to me personally. Also, I believe if you use an open source project especially professionally, contributing back is the right thing.

My level of contributions vary a fair bit depending on my personal and professional level of busy, but mostly through contrib module maintenance/patch submissions. Also in the last year or so, I've been getting into a lot more mentorship roles - both in my new company and within the broader community. Restarted my local Drupal meetup and am doing presentations there regularly.

Rachel Norfolk Community Liaison at Drupal Association

Contribution for me is, at least partly, a selfish act. I have learned so much from some of the best people in the industry, simply by following along and helping where I can. I have also built up an amazing network of people who, because they know I help others, are more prepared to help me when I need it. Both code and other ways of contributing. I’m occasionally in the Drupal core issue queues, I help mentor others and I get involved in community issues.

Renato Goncalves Software Engineer at CI&T's Drupal Competence Office (DCO)

My first motivation to contribute to the Drupal community is helping others that have the same requirement as mine. To be honest, I get very happy when someone uses my community code in their projects. I'm glad to know that I'm helping people. When I'm developing a new feature I check if my solution can be useful to other projects, and in that way I create my code in a generic way. Usually, I'm the first to reuse the code several times. I think this is important to make Drupal a powerful and collaborative framework. I liked my first experience using the framework because for each requirement of my project, Drupal has a solution. I think contributing to the community is important for that. More and more new people are going to use the framework, and consequently become new contributors, and in that way, it becomes increasingly powerful and efficient. An example of this is the Drupal Security Team, where they work hard to ensure that Drupal is a secure framework. I'm making contributions at the same time I deliver projects. Today I write my code in a generic way, that is, the code can be reused at other times. A good example of this model is the Janrain Connect project. This project is official in the community (a contrib project) and my team and I work hard using 100% generic code, so we can reuse this code in other cases.

When we need to make some improvement in the code, the first point is checking a way to make this improvement using a generic solution. Using this approach we can help our project and help the community. In this way, we are contributing to making an organized and agile framework. The goal is that other people don't need to re-write code. It is a way of transforming the framework into a collaborative model.

Thomas Seidl Drupal developer, “The Search API Guy”

My motivation comes from several sources: First off, I just like programming, and while fixing bugs, writing tests or giving support isn’t always fun, a lot of the time working on my modules is. It’s just one of my hobbies in that regard. Then, with my modules running on more than 100,000 sites (based on the report), there’s both a sense of accomplishment and responsibility – I feel proud in providing functionality for so many sites, and while, as a volunteer, I don’t feel directly responsible for them, I still want to help improve them where I can, take away pain points and ensure they keep running. And lastly, having a popular, well-maintained module is also the base of my business as a freelancer: it not only provides marketing for my abilities, but also the very market of users who want customizations. So, maintaining and improving my modules is also, indirectly, important for my income, even though the vast majority of my contributed work is unpaid.

Apart from participating in coding standards discussions, I almost exclusively contribute by maintaining my modules (and, increasingly rarely, adding new ones) – fixing bugs, adding features, answering support requests, etc. I sometimes also provide patches for other modules, but generally only when I’m paid to do so. (“My modules” being Search API and its add-on modules Database Search, Autocomplete, Saved Searches and, for D7 only, Solr, Pages, Location and Multi-Index Searches.)

And Lastly....

It’s not just any brands that have adopted Drupal as their CMS – they are the cream of brands. From NASA to the Emmy Awards. From Harvard University to eBay. From Twitter to New York State. These brands have various reasons to choose Drupal as their Content Management System: Drupal’s adaptability to any business process, advanced UX and UI capabilities for an interactive and personalized experience, load-time optimization functionalities, easy content authoring and management, high security standards, the API-first architecture and so much more!

The major reason why Drupal is being accepted and endorsed by more than a million websites today is that Drupal is always ahead of the curve, especially since Drupal adopted a continuous innovation model wherein updated versions are released every six months with seamless upgrade paths. All of this is possible because of the proactive and ever-evolving Drupal community. The goals for their contributions may vary - from optimizing projects for personal/professional success to creating an impact on others or simply gaining more experience. Either way, they are making a difference and taking Drupal to the next level every time they contribute. Thanks to all the contributors who are making Drupal a better place.

I’d like to end with an excerpt from Dries - “It’s really the Drupal community and not so much the software that makes the Drupal project what it is. So fostering the Drupal community is actually more important than just managing the code base.”

Warmly thanking all the mentioned contributors for helping me put this article together.

Drupal blog: Low-code and no-code tools continue to drive the web forward

Planet Drupal - Mon, 2019/08/19 - 11:34pm

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Low-code and no-code tools for the web are on a decade-long rise; they enable self-service for marketers, and allow developers to focus on innovation.

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web. Examples include:

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019, "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster (FTP hand-written HTML files, anyone?) has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Jacob Rockowitz: Requesting a medical appointment online begins a patient's digital journey

Planet Drupal - Mon, 2019/08/19 - 6:59pm

Experience

My experience with healthcare, Drupal, and webforms

For the past 20 years, I have worked in healthcare helping Memorial Sloan Kettering Cancer Center (MSKCC) evolve their digital platform and patient experience. About ten years ago, I persuaded MSKCC to switch to Drupal 6, which was followed by a migration to Drupal 8. More recently, I have become the maintainer of the Webform module for Drupal 8. Now, I want to leverage my experience and expertise in healthcare, webforms, and Drupal, to start exploring how we can improve patient and caregiver’s digital experience related to online appointment requests.

It’s important that we understand the problem/challenge of requesting an appointment online, examine how hospitals are currently solving this problem, and then offer some recommendations and ways to improve existing approaches. Instead of writing one very long blog post, I’m going to break up this discussion into a series of three blog posts. This initial post is going to address the patient journey and experience around an appointment request form.

These blog posts are not Drupal-specific, but my goal is to create and share an exemplary "Request an appointment" form template for the Webform module for Drupal 8.

Improving patient and caregiver’s digital experience

Improving the patient and caregiver digital experience is a very broad, massive, and challenging topic. Personally, my goal when working with doctors, researchers, and caregivers is…

Making things "easy" for patients and caregivers in healthcare is easier said...Read More

Agaric Collective: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

Planet Drupal - Mon, 2019/08/19 - 4:45pm

In the previous two blog posts, we learned to migrate data from JSON and XML files. We also showed how to configure the migrations to fetch remote files. In today's blog post, we will learn how to add HTTP request headers and authentication to the request. For HTTP authentication, you need to choose among three options: Basic, Digest, and OAuth2. To provide this functionality, the Migrate API leverages the Guzzle HTTP Client library. Usage requirements and limitations will be presented. Let's begin.

Migrate Plus architecture for remote data fetching

The Migrate Plus module provides an extensible architecture for importing remote files. It makes use of different plugin types to fetch files, add HTTP authentication to the request, and parse the response. The following is an overview of the different plugins and how they work together to allow code and configuration reuse.

Source plugin

The url source plugin is at the core of the implementation. Its purpose is to retrieve data from a list of URLs. Ingrained in the system is the goal to separate the file fetching from the file parsing. The url plugin will delegate both tasks to other plugin types provided by Migrate Plus.

Data fetcher plugins

For file fetching, you have two options. The first is the file plugin: a general-purpose fetcher for getting files from the local file system or via stream wrappers. This plugin has been explained in detail in the posts about JSON and XML migrations. Because it supports stream wrappers, this plugin is very useful for fetching files from different locations and over different protocols. But it has two major downsides. First, it does not allow setting custom HTTP headers nor authentication parameters. Second, this fetcher is completely ignored if used with the xml or soap data parser (see below).

The second fetcher plugin is http. Under the hood, it uses the Guzzle HTTP Client library. This plugin allows you to define a headers configuration. You can set it to a list of HTTP headers to send along with the request. It also allows you to use authentication plugins (see below). The downside is that you cannot use stream wrappers. Only protocols supported by curl can be used: http, https, ftp, ftps, sftp, etc.

Data parsers plugins

Data parsers are responsible for processing the files considering their type: JSON, XML, or SOAP. These plugins let you select a subtree within the file hierarchy that contains the elements to be imported. Each record might contain more data than what you need for the migration. So, you make a second selection to manually indicate which elements will be made available to the migration. Migrate Plus provides four data parsers, but only two of them use the data fetcher plugins. Here is a summary:

  • json can use any of the data fetchers. It offers an extra configuration option called include_raw_data (see the sketch below). When set to true, in addition to all the fields manually defined, a new one is attached to the source with the name raw. This contains a copy of the full object currently being processed.
  • simple_xml can use any data fetcher. It uses the SimpleXML class.
  • xml does not use any of the data fetchers. It uses the XMLReader class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.
  • soap does not use any data fetcher. It uses the SoapClient class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.

The difference between xml and simple_xml was presented in the previous article.
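As a quick illustration of the include_raw_data option mentioned in the list above, a hypothetical json source could be configured like this; all names except include_raw_data are made up for the example:

  source:
    plugin: url
    data_fetcher_plugin: http
    data_parser_plugin: json
    # Attach a 'raw' field with a copy of the full object being processed.
    include_raw_data: true
    urls:
      - https://example.com/data.json
    item_selector: /data/items
    fields:
      - name: src_id
        label: 'ID'
        selector: id
    ids:
      src_id:
        type: integer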

Authentication plugins

These plugins add authentication headers to the request. If the credentials are correct, you can fetch data from protected resources. They work exclusively with the http data fetcher. Therefore, you can use them only with the json and simple_xml data parsers. To do that, you set an authentication configuration whose value can be one of the following:

  • basic for HTTP Basic authentication.
  • digest for HTTP Digest authentication.
  • oauth2 for OAuth2 authentication over HTTP.

Below are examples for JSON and XML imports with HTTP headers and authentication configured. The code snippets do not contain real migrations. You can also find them in the ud_migrations_http_headers_authentication directory of the demo repository https://github.com/dinarcon/ud_migrations.

Important: The examples are shown for reference only. Do not store any sensitive data in plain text or commit it to the repository.

JSON and XML Drupal migrations with HTTP request headers and Basic authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the basic authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: basic
    username: totally
    password: insecure
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and Digest authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the digest authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: digest
    username: totally
    password: insecure
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and OAuth2 authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the oauth2 authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: oauth2
    grant_type: client_credentials
    base_uri: https://understanddrupal.com
    token_url: /oauth2/token
    client_id: some_client_id
    client_secret: totally_insecure_secret
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

What did you learn in today’s blog post? Did you know the configuration names for adding HTTP request headers and authentication to your JSON and XML requests? Did you know that this was limited to the parsers that make use of the http fetcher? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Agiledrop.com Blog: Top 10 Drupal Accessibility Modules

Planet Drupal - Mon, 2019/08/19 - 12:09pm

In this post, we'll take a look at some of the most useful modules that will help make your Drupal site more accessible to developers, content editors and users alike.

READ MORE

Dries Buytaert: Low-code and no-code tools continue to drive the web forward

Planet Drupal - Mon, 2019/08/19 - 10:35am

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web. Examples include:

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to the Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019, “In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year.”

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: “As creating things on the internet becomes more accessible, more people will become makers.”

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Categories:

Liip: How to nail your on-page SEO: A step-by-step guide

Planet Drupal - Mon, 2019/08/19 - 12:00am

On-page SEO is much more than title tags, meta descriptions and valuable content. Here is my actionable guide for digital marketers. I am an SEO Specialist and teamed up with one of my colleagues – a Content Marketing Specialist – for this article. Have fun reading it.

On-page SEO is about creating relevant signals to let search engines know what your page is about, which improves the website’s ranking in search results.

No IT skills are needed to implement on-page recommendations, as most CMSs have an extension for this. For example, if you use WordPress, download the Yoast SEO plugin; for Drupal, add the Metatag module.

On-Page SEO: Hypothetical case study

How do you create those relevant signals? Let’s take the example of a florist. StarFlo is located in Lausanne and Zurich, Switzerland, and has a website in three languages (French, German and English). The flower shop decided to create a specific product page for weddings, in English. A product page is designed to provide information to users about a product and/or a service.

Find relevant keywords with the right search intent

The first step is to define keywords with the highest potential. The goal is to select words, which help to increase the ranking of the wedding product page.
Here are some examples of keywords (non-exhaustive list):

  • “wedding flowers lausanne”
  • “wedding flowers zurich”
  • “wedding table decorations”
  • “wedding bouquet”
  • “rose bouquet bridal”
  • “winter wedding flowers”
  • “wedding floral packages”
  • “orchid wedding bouquet”
  • “wedding flowers shop”

We will take the monthly volume of English keywords in Switzerland into consideration, because we are focusing on a flower shop located in Lausanne and Zurich whose product page is in English.

According to the image below, “wedding table decorations” and “wedding bouquet” have a higher volume (column Search) and a low difficulty score (column KD). Therefore, it could probably make sense to use those keywords. However, you need to investigate further.

If you check the Google search results for the keyword “wedding table decorations”, you see a lot of images coming from Pinterest. People who look for “wedding table decorations” are looking for ideas and inspiration. As a result, “wedding table decorations” might be a great blog post topic. Since StarFlo wants to create a product page, we suggest using “wedding flowers shop” as the primary keyword, even though it has a lower volume than “wedding table decorations”. The intent of people searching for “wedding flowers shop” is to buy wedding flowers, and the intent of StarFlo’s new product page is to sell wedding flowers. The goal is therefore to align the intent of the target public and the intent of the product page through this keyword.
Once you have the keywords, optimize the content of the page

On-page SEO structural elements

Title tags, H1, H2, and images are part of the on-page structural elements that communicate with search engines.

Title tag best practices: clear and easy to understand

The title tag is the page title and must contain the keyword in less than 60 characters (600 pixels). Ideally, the title tag is unambiguous and easy to understand. You define the title tag individually for each page.

For example:

Wedding flowers shop in Zurich & Lausanne | StarFlo

You do not need to end your title tag with your brand name. However, it helps to build awareness, even without raising the volume of clicks.

Meta description best practices: a short description with a call to action

The meta description describes the content of a page and appears in the search results. The purpose of the meta description is to help the user choose the right page among the results in Google Search. It must be clear, short and engaging. You have 160 characters at your disposal.

We recommend finishing your meta description with a clear call-to-action. Use a verb to describe what you want your target audience to do.

For example:

StarFlo is a flower shop located in Lausanne & Zurich which designs traditional & modern wedding flower arrangements. See our unique wedding creations.

SEO URL best practices

The URL is the address of your page. Its name should both describe the content of the page and place the page within the overall site map. The URL should contain the keyword and be short.
The structure of the URL is usually governed by rules in the CMS you are using.
Examples for StarFlo landing page about wedding flowers:
✔︎ https://starflo.ch/wedding-flowers
✘ https://starflo.ch/node/357

Use secondary keywords to reinforce the semantics of your page

StarFlo wants to be listed at the top for “wedding flowers shop” and “Lausanne”. You can help this page improve its ranking by also using secondary keywords. Secondary keywords are keywords that relate to your primary keyword.

Ask yourself: what questions are your target audience looking to answer by searching for these keywords? What valuable information can you provide to help them?
Your text content must offer added value for your target audience. To ensure this, create a list of topics. In the case of StarFlo, you can include secondary keywords such as “wedding bouquet” and “wedding table decorations”. It may seem odd that the primary keyword has a lower volume than the secondary keywords, but it makes sense in this context: these secondary keywords reinforce the semantics of the page.

In the “wedding bouquet” section, you can give some examples of “Bridesmaid bouquets”, “Bridal bouquets” and “Maid of Honor bouquets”, as well as other services or products related to the proposed bouquets.

SEO H1 & H2 tags best practices: structure the text with several titles

A structured text with titles and subtitles is easier to read. Furthermore, titles support your organic referencing, as they are considered strong signals by search engines. Start by defining your H1 and H2 titles. Use only one H1. Your titles should be clear and descriptive. Avoid generic or thematic titles.

Here is an example:

  • H1: StarFlo, wedding flower shop specialized in nuptial floral design in Lausanne, Zurich & the surrounding area
  • H2: Outstanding wedding table decorations created by our wedding flower specialist in Lausanne & Zurich
  • H2: Wedding bouquet for the bride in Lausanne & Zurich
  • H2: Best seasonal flowers for your wedding
On-page content best practices: Write a text longer than 300 words

Keep in mind these three key points when you write your text:

  • Anything under 300 words is considered thin content.
  • Make sure that your primary keyword is part of the first 100 words in your text.
  • Structure your text with titles and subtitles to help your readers. Moreover, as said above, H1 & H2 are strong signals.
Images & videos best practices: Define file names, alt-texts and captions

Search engines don’t scan the content of a video or an image (yet). Search engines scan the content of file names, alt-texts and captions only.
Define a meaningful file name and alt-text for each image and video; both should include your keyword. Google can then grasp what the image shows. Remember that you want the website to load fast, so you may compress images.

SEO Internal linking best practices: create a thematic universe within your website using internal links

When writing your text, try to create links to other pages on your website. You can add links in the text or in teasers to draw attention to more (or related) topics.

From a content point of view, when you link pages of your own website, you add value to your target audience as their attention is drawn to other pages of interest. Furthermore, the audience may stay longer on your website. Moreover, creating links gives the search engine a better understanding of the website and creates a thematic universe. Topics within such a universe will be preferred by search engines. Thematic universes help Google determine the importance of a page.

From an SEO point of view, internal linking is very important because it implies a transfer of authority between pages. A website with high domain authority will appear higher in the search engine results. Usually, homepages have the highest authority. In the case of StarFlo, you could add a hyperlink that connects the homepage to the wedding page. We also recommend adding hyperlinks between pages. For instance, if you are writing about winter wedding flowers on your wedding page and you have a dedicated page about seasonal bouquets, you could add a hyperlink from the wedding page to the seasonal flower page.

The result: the homepage will transfer authority to the wedding page, and the wedding page to the seasonal flower page. Each transfer of authority comes with a slight damping factor. This means that if a page with an authority of 10 links to another page, the authority transferred will be, for example, 8.5.

Outbound links best practices: add relevant content

Link your content to external sources when it makes sense. For example, StarFlo provided the floral decorations for a wedding in the Lausanne Cathedral. You can add a link to the website of Lausanne’s Cathedral when mentioning the event.

Bonus: write SEO-optimized blog posts with strong keywords

After publishing your product page, create more entry points to your website. For example, you can write blog posts about your main subject using powerful keywords.

Answer the needs of your readers

When we did the keyword research for StarFlo, we identified a list of topics connected to the main topic. As a reminder, when we were looking at wedding flowers, we discovered that people were very interested in wedding table decorations. We also noticed that people looked for different kinds of bouquets (types of flowers, etc.). You could, for instance, create a page about winter wedding flowers and use these related keywords on it. This strategy helps to define blog post topics.

On the winter wedding flowers page, you could describe the local flowers available in the winter months, the flowers that go best together, etc.

In this case, each of your pages should focus on a different keyword. If two pages are optimized for the same keyword, they compete with each other.

Prioritize your writing according to your business

Once you have a list of topics, it’s good practice not to start writing all at once. We recommend creating an editorial plan. Be honest with yourself: how many hours per week can you dedicate to writing? How long do you need to write a 500-word article? How long do you need to find or create suitable images?

Start with the strongest keywords and the topic with the highest priority for your business.

Here is an example of prioritization:

  • “Wedding table decoration”
  • “Wedding bouquet”
  • “Winter wedding flowers”
  • “Winter wedding floral packages”

Suppose you start writing in September and the branding guidelines of your shop include ‘local’, ‘sustainable’ and ‘proximity’. You will, therefore, write about “Winter wedding flowers” first.

You decide to focus on:

  • “Winter wedding flowers”
  • “Winter wedding floral packages”

As a wrap-up, we prepared the checklist below for you.

Checklist
  • Main keyword is defined
  • Topic brings value to the target public
  • Meta description and title tag are written and contain the keyword
  • URL contains the keyword
  • H1 contains the keyword, at the beginning, if possible
  • Text contains a keyword density of 3%
  • Introduction and last paragraph have a particularly high keyword density
  • File names of photos and videos contain the keyword
  • Alt-Text of photos and videos contain the keyword
  • Photo captions contain the keyword
  • Page contains links to other pages on the site
  • Page contains links to valuable external resources
What’s next

On-page SEO is an important part of SEO. However, it’s not the only aspect: technical SEO also has a tremendous impact. We are working on a hands-on blog post about technical SEO. Reach out to us if you wish to be notified when our guide is ready! Moreover, don’t miss our next SEO/content meet-up taking place on the 26th of September. We are going to explain how to perform keyword research. Contact our content expert if you want to be part of the meet-up.

If you want a personalized workshop about on-page SEO or just want to increase your ranking on Google, contact our SEO team for English, German and French.

Categories:

Agaric Collective: Migrating JSON files into Drupal

Planet Drupal - Sun, 2019/08/18 - 3:34pm

Today we will learn how to migrate content from a JSON file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. The example includes node, images, and paragraphs migrations. Let’s get started.

Note: Migrate Plus has many more features. For example, it contains source plugins to import from XML files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing JSON files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD JSON source migration whose machine name is ud_migrations_json_source. It comes with four migrations: udm_json_source_paragraph, udm_json_source_image, udm_json_source_node_local, and udm_json_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain JSON migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from JSON. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

{ "data": { "udm_people": [ { "unique_id": 1, "name": "Michele Metts", "photo_file": "P01", "book_ref": "B10" }, {...}, {...} ], "udm_book_paragraph": [ { "book_id": "B10", "book_details": { "title": "The definite guide to Drupal 7", "author": "Benjamin Melançon et al." } }, {...}, {...} ], "udm_photos": [ { "photo_id": "P01", "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg", "photo_dimensions": [240, 351] }, {...}, {...} ] } }

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from a JSON file

In any migration project, understanding the source is very important. For JSON migrations, there are two major considerations. First, where in the file hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. Second, when you get to the array of records that you want to import, what fields are going to be made available to the migration. It is possible that each record contains more data than needed. For improved performance, it is recommended to manually include only the fields that will be required for the migration. The following code snippet shows part of the local JSON file relevant to the node migration:

{ "data": { "udm_people": [ { "unique_id": 1, "name": "Michele Metts", "photo_file": "P01", "book_ref": "B10" }, {...}, {...} ] } }

The array of records containing node data lies two levels deep in the hierarchy: starting with data at the root and then descending one level to udm_people. Each element of this array is an object with four properties:

  • unique_id is the unique identifier for each record within the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin  is set to file and the data_parser_plugin to json. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.
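For instance, a minimal sketch for reading two local files at once might look like the following; the file paths are hypothetical, and both files would need to share the same structure:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    # Hypothetical files with an identical internal structure.
    - modules/custom/my_module/sources/people_2018.json
    - modules/custom/my_module/sources/people_2019.json
  item_selector: data/udm_people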

The item_selector configuration indicates where in the JSON file lies the array of records to be migrated. Its value is an XPath-like string used to traverse the file hierarchy. In this case, the value is data/udm_people. Note that you separate each level in the hierarchy with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contained spaces, you need to put double quotation marks (") around it when referring to it in the migration (see the sketch after this list).
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the location specified by the item_selector configuration. In the example, the fields are direct children of the records to migrate. Therefore, only the property name is specified (e.g., unique_id). If you had nested objects or arrays, you would use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.
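As a hypothetical sketch of that quoting rule, consider a field named Unique ID. Following the rule stated above, the process pipeline would wrap the name in double quotation marks when referring to it:

source:
  # ...
  fields:
    # The name is arbitrary; this one contains a space.
    - name: 'Unique ID'
      selector: unique_id
process:
  # Per the rule above, the name is wrapped in double quotation marks.
  # The outer single quotes are only YAML syntax to preserve them.
  title: '"Unique ID"'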

Finally, you specify an ids array of field names that would uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_json_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_json_source_image
    - udm_json_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and dependencies point to two JSON migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from a JSON file

Let’s consider an example where the records to migrate have many levels of nesting. The following snippets show part of the local JSON file and source plugin configuration for the paragraph migration:

{ "data": { "udm_book_paragraph": [ { "book_id": "B10", "book_details": { "title": "The definite guide to Drupal 7", "author": "Benjamin Melançon et al." } }, {...}, {...} ] } source: plugin: url data_fetcher_plugin: file data_parser_plugin: json urls: - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json item_selector: data/udm_book_paragraph fields: - name: src_book_id label: 'Book ID' selector: book_id - name: src_book_title label: 'Title' selector: book_details/title - name: src_book_author label: 'Author' selector: book_details/author ids: src_book_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Notice that book_details is an object with two properties: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy as needed to find the value that should be assigned to the field. Every level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to structure the JSON file is to have two properties: paragraph_id would contain the unique identifier for the record, and paragraph_data would be an object with a property to set the paragraph type. The latter would also have an arbitrary number of extra properties with the data to be migrated. In the process section, you would iterate over the records to map the paragraph fields.
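The following is a minimal sketch of that idea. All names here (paragraph_id, paragraph_data, the selectors, and the file path) are hypothetical and only illustrate the structure described above:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/my_module/sources/paragraphs.json
  item_selector: data/udm_paragraphs
  fields:
    - name: src_paragraph_id
      selector: paragraph_id
    # The paragraph type lives inside the nested object.
    - name: src_paragraph_type
      selector: paragraph_data/type
    # The whole nested object can also be exposed for later processing.
    - name: src_paragraph_data
      selector: paragraph_data
  ids:
    src_paragraph_id:
      type: string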

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from a JSON file

Let’s consider an example where the records to migrate have more data than needed. The following snippets show part of the local JSON file and source plugin configuration for the image migration:

{ "data": { "udm_photos": [ { "photo_id": "P01", "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg", "photo_dimensions": [240, 351] }, {...}, {...} ] } } source: plugin: url data_fetcher_plugin: file data_parser_plugin: json urls: - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json item_selector: data/udm_photos fields: - name: src_photo_id label: 'Photo ID' selector: photo_id - name: src_photo_url label: 'Photo URL' selector: photo_url ids: src_photo_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the records with image data have extra properties that are not used in the migration. Particularly, the photo_dimensions property contains an array with two values representing the width and height of the image, respectively. To ignore this property, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/0 for the width and photo_dimensions/1 for the height. Note that you use a zero-based numerical index to get the values out of arrays. Like with objects, a slash (/) is used to separate each level in the hierarchy. You can go as far as necessary in the hierarchy.
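For instance, if you did want to import the dimensions, the fields configuration could include entries like the following (the src_ names are arbitrary):

fields:
  # Zero-based indexes select elements out of the array.
  - name: src_photo_width
    label: 'Photo width'
    selector: photo_dimensions/0
  - name: src_photo_height
    label: 'Photo height'
    selector: photo_dimensions/1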

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

JSON file location

When using the file data fetcher plugin, you have three options to indicate the location of the JSON files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/json_files/example.json.
  • Use an absolute path pointing to the JSON location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/json_files/example.json.
  • Use a stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://json_files/example.json.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/json_files/example.json.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/json-files/example.json.
Migrating remote JSON files

Migrate Plus provides another data fetcher plugin named http. You can use it to fetch files using the http and https protocols. Under the hood, it uses the Guzzle HTTP Client library. In a future blog post we will explain this data fetcher in more detail. For now, the udm_json_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.myjson.com/bins/110rcr
  item_selector: data/udm_people
  fields: ...
  ids: ...

And that is how you can use JSON files as the source of your migrations. Many more configurations are possible. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

What did you learn in today’s blog post? Have you migrated from JSON files before? If so, what challenges have you found? Did you know that you can read local and remote files? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories:

Agaric Collective: Migrating CSV files into Drupal

Planet Drupal - Sat, 2019/08/17 - 9:38pm

Today we will learn how to migrate content from a Comma-Separated Values (CSV) file into Drupal. We are going to use the latest version of the Migrate Source CSV module, which depends on the third-party library league/csv. We will show how to configure the source plugin to read files with or without a header row. We will also talk about a new feature that allows you to use stream wrappers to set the file location. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD CSV source migration whose machine name is ud_migrations_csv_source. It comes with three migrations: udm_csv_source_paragraph, udm_csv_source_image, and udm_csv_source_node.

You can get the Migrate Source CSV module using composer: composer require drupal/migrate_source_csv. This will also download its dependency: the league/csv library. The example assumes you are using the 8.x-3.x branch of the module, which requires composer to be installed. If your Drupal site is not composer-based, you can use the 8.x-2.x branch. Continue reading to learn the difference between the two branches.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain CSV migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from CSV files.

Note that you can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating CSV files with a header row

In any migration project, understanding the source is very important. For CSV migrations, the primary thing to consider is whether or not the file contains a row of headers. Other things to consider are what characters to use as delimiter, enclosure, and escape character. For now, let’s consider the following CSV file whose first row serves as column headers:

unique_id,name,photo_file,book_ref
1,Michele Metts,P01,B10
2,Benjamin Melançon,P02,B20
3,Stefan Freudenberg,P03,B30

This file will be used in the node migration. The four columns are used as follows:

  • unique_id is the unique identifier for each record in this CSV file.
  • name is the name of a person. This will be used as the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration of the CSV source plugin for the node migration:

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]

The name of the plugin is csv. Then you define the path pointing to the file itself. In this case, the path is relative to the Drupal root. Finally, you specify an ids array of column names that would uniquely identify each record. As already stated, the unique_id column serves that purpose. Note that there is no need to specify all the column names from the CSV file. The plugin will automatically make them available. That is the simplest configuration of the CSV source plugin.

The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_csv_source_image
    source: photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_csv_source_image
    - udm_csv_source_paragraph
  optional: []

Note that the source for setting the image reference is photo_file. In the process pipeline, you can directly use any column name that exists in the CSV file. The configuration of the migration lookup plugin and dependencies point to two CSV migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating CSV files without a header row

Now let’s consider two examples of CSV files that do not have a header row. The following snippets show the example CSV file and source plugin configuration for the paragraph migration:

B10,The definite guide to Drupal 7,Benjamin Melançon et al.
B20,Understanding Drupal Views,Carlos Dinarte
B30,Understanding Drupal Migrations,Mauricio Dinarte

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_book_paragraph.csv
  ids: [book_id]
  header_offset: null
  fields:
    - name: book_id
    - name: book_title
    - name: 'Book author'

When you do not have a header row, you need to specify two more configuration options. header_offset has to be set to null. fields has to be set to an array where each element represents a column in the CSV file. You include a name for each column following the order in which they appear in the file. The name itself can be arbitrary. If it contained spaces, you need to put quotes (') around it. After that, you set the ids configuration to one or more columns using the names you defined.

In the process section, you refer to source columns as usual. You write their names, adding quotes if they contain spaces. The following snippet shows how the process section is configured for the paragraph migration:

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'

The final example will show a slight variation of the previous configuration. The following two snippets show the example CSV file and source plugin configuration for the image migration:

P01,https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02,https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03,https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_photos.csv
  ids: [photo_id]
  header_offset: null
  fields:
    - name: photo_id
      label: 'Photo ID'
    - name: photo_url
      label: 'Photo URL'

For each column defined in the fields configuration, you can optionally set a label. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to source columns. You keep using the column name. You can see this in the value of the ids configuration.

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: photo_url

CSV file location

When setting the path configuration you have three options to indicate the CSV file location:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/csv_files/example.csv.
  • Use an absolute path pointing to the CSV location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/csv_files/example.csv.
  • Use a stream wrapper. This feature was introduced in the 8.x-3.x branch of the module. Previous versions cannot make use of them.

Being able to use stream wrappers gives you many options for setting the location to the CSV file. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://csv_files/example.csv.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/csv_files/example.csv.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/csv-files/example.csv.
CSV source plugin configuration

The configuration options for the CSV source plugin are very well documented in the source code. They are included here for quick reference:

  • path is required. It contains the path to the CSV file. Starting with the 8.x-3.x branch, stream wrappers are supported.
  • ids is required. It contains an array of column names that uniquely identify each record.
  • header_offset is optional. It is the index of the record to be used as the CSV header row, and thereby the source of each record's field names. It defaults to zero (0) because the index is zero-based. For CSV files with no header row, the value should be set to null.
  • fields is optional. It contains a nested array of names and labels to use instead of a header row. If set, it will overwrite the column names obtained from header_offset.
  • delimiter is optional. It contains the one-character column delimiter. It defaults to a comma (,). For example, if your file uses tabs as delimiter, you set this configuration to \t (see the sketch after this list).
  • enclosure is optional. It contains the one character used to enclose column values. It defaults to double quotation marks (").
  • escape is optional. It contains the one character used for character escaping in column values. It defaults to a backslash (\).
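As a quick illustration of those last three options, here is a hypothetical configuration for a tab-separated file with no header row that uses single quotes to enclose values (the path and column names are made up for the example):

source:
  plugin: csv
  path: modules/custom/my_module/csv_files/example.tsv
  ids: [unique_id]
  header_offset: null
  fields:
    - name: unique_id
    - name: name
  # A tab character as the column delimiter.
  delimiter: "\t"
  # Single quotes enclose the column values.
  enclosure: "'"
  # Backslash as the escape character (the default).
  escape: '\'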

Important: The configuration options changed significantly between the 8.x-3.x and 8.x-2.x branches. Refer to this change record for a reference of how to configure the plugin for the 8.x-2.x.

And that is how you can use CSV files as the source of your migrations. Because this is such a common need, there was an effort to move the CSV source plugin into Drupal core. The effort is currently on hold, and it is unclear whether it will materialize during Drupal 8’s lifecycle. The maintainers of the Migrate API are focusing their efforts on other priorities at the moment. You can read this issue to learn about the motivation and context for offering this functionality in Drupal core.

Note: The Migrate Spreadsheet module can also be used to migrate data from CSV files. It also supports Microsoft Office Excel and LibreOffice Calc (OpenDocument) files. The module leverages the PhpOffice/PhpSpreadsheet library.

What did you learn in today’s blog post? Have you migrated from CSV files before? Did you know that it is now possible to read files using stream wrappers? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories:

Spinning Code: SC DUG August 2019

Planet Drupal - Fri, 2019/08/16 - 1:00pm

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks on very cheap ways to host Drupal 8.

Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules
  1. It has to actually work for D8 (so modern PHP version, working database, etc),
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).
Reporting

Be prepared to talk for about 5 minutes on how your solution would work.  Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Categories:

clemens-tolboom commented on issue hechoendrupal/drupal-console#4129

On github - Fri, 2019/08/16 - 10:41am
clemens-tolboom commented on issue hechoendrupal/drupal-console#4129 Aug 16, 2019 clemens-tolboom commented Aug 16, 2019

Code is added in df1b40a This is due to not installing https://github.com/hechoendrupal/drupal-console-en while installing drupal console through c…