Drupal.org - aggregated feeds in category Planet Drupal

OpenSense Labs: The Ultimate Guide To Open Source Strategy

Sat, 2021/09/25 - 3:30pm
By Meenakshi

In this era of rapid technological change, a smooth, trouble-free working environment matters. Companies are improving their workflows and management in different ways, and to keep pace with innovation, many tech companies are adopting open-source software strategies. Open-source software not only makes your work easier but is also cost-effective and adaptable as your business grows.

Open-source software is released under an open-source license so that anyone can use it. Platforms like GitHub host many open-source projects; Linux, Ansible, and Kubernetes are well-known examples.

Likewise, thousands of companies rely on open-source platforms, benefiting from a sustainable approach and a large user base. Open-source software is increasingly shaping enterprise software architectures, and its use has become imperative for business growth. In this article, you will discover why companies build their development around open-source strategies.

Why do companies go open source  

Knowledge is meant to be shared, and the best way to learn is to teach. Open-source software is a free platform that allows its users to share experiences with others. Sharing your work can lead to better quality and helps you get constructive feedback. One of the best qualities of open-source software is that the source code is kept open and free to download; you can also modify it and incorporate it into third-party projects, which helps enhance the software itself. And that is why many companies are choosing open-source software strategies.

Every company, whether VMware, Dell, Nordstrom, or Home Depot, relies on open-source software. It all started with Red Hat, the pioneer of the open-source software market, and investors long doubted whether other open-source companies could become significant. But the open-source software market has grown rapidly, and open source now has a strong presence in the industry, with many big companies putting heavy emphasis on open-source strategies.
As for milestones in the open-source software market: in 2019, Red Hat was acquired by IBM for $34 billion, Mulesoft was acquired for $6.5 billion after going public, and MongoDB reached a market value of over $4 billion. In addition, a growing cohort of remarkable open-source software companies was working its way through the growth stages of evolution, including Confluent, HashiCorp, Databricks, Kong, Cockroach Labs, and many more. This shift towards open source has raised the profile of open-source software strategies and improved their standing in the industry.

Source: Boston Consulting Group

Business use of open-source software is rising rapidly. With the increase in the number of users, we can now see remarkable development in the open-source market. Open source has become a community where people from different backgrounds work together, and it gives companies the flexibility to choose the right staff and save money. Some of the most important reasons these companies choose open-source software strategies are:


One of the major advantages of open-source software is that you can inspect and track your code without relying on vendors, which creates flexibility in your workflow. Peer programmers can help improve the source code, and you can collaborate openly with them on a project basis. With no vendor lock-in, you can take your open-source code anywhere and use it as you see fit. All of these perks give companies good reason to choose open-source software strategies.

Open Source in numbers

Unlike proprietary software, which businesses must pay for, open-source software is freely available. Moreover, proprietary vendors keep their source code confidential, while open-source software is developed in public. Vendors rely a great deal on the open community, and not only vendors: employees, freelancers, and hobbyist programmers participate in open-source projects to be recognized for their technical skills, because the platform offers global visibility as their work is accessible to all.

Contributions from big companies, vendors, developers, and freelance programmers have helped the global market for open-source software services grow at an exponential rate. The flexibility to modify code and its free availability are the main advantages driving its adoption.


The tremendous growth of the open-source software market indicates that companies are ready to invest more and more in the open-source space. Research suggests that open-source adoption in organizations is expected to grow across all IT segments, accelerating business growth.

Merits and caveats of open source 

The demand for open-source software services has grown tremendously as businesses look to accelerate. There are many reasons why the open-source software strategy has become so popular. Let's have a look:

  • One of the important features of open-source software systems is that their code can be fully customized and vendors can change portions of the code or add components to alter it for different business needs.
  • The latest technologies, such as AI and ML, are built with open-source software; the community around these projects ensures that applications are developed rapidly, which provides an edge.
  • The community takes a collaborative approach to software development, which helps drive projects forward. Enterprise-grade open-source software faces a lower risk of inactivity because of the community's involvement.
  • Finding developers is easier in an open-source system as it is supported by a large number of developers of diverse backgrounds. More on diversity, inclusion and equity in open source here.

Nothing is black and white; there are greys in everything. For instance, licensing from commercial open-source vendors can be difficult to comprehend. The MIT and Apache licenses illustrate the range: the MIT license comprises only the barest requirements regarding software redistribution, while the Apache license puts forth more elaborate terms, which is why the latter is most often seen with open-source projects designed for enterprise-grade deployment. More on open source licensing here. Organizations sometimes find themselves at a crossroads when they have to strike the right balance between reaping the benefits of OSS and bearing the legal liability when something goes wrong. Also, while open-source security has a reputation for being among the best in the business, under-funded open-source projects can have serious repercussions.

Why create an open source strategy for your business?

An open-source software strategy is a collective approach towards innovation; it requires a systematic flow and management to achieve its goals. According to Ibrahim Haddad at Samsung, hiring top-tier development talent that already has working knowledge of open-source environments helps businesses achieve their goals and also guides existing developers, since an experienced person can easily supervise newcomers. Lee Congdon, former CIO of Red Hat, echoes the same thought, stating that "Open source is a pool of space for businesses to attract new talent."

The business imperative

So far, we have seen the importance of an open-source software strategy for business growth, but we still need to figure out how to set up such a strategy to boost efficiency and minimize risk. The very first step is to create a plan, and in an open-source software strategy that means writing a strategy document. This will help keep you out of the trouble that can arise from wrong decisions.

Documenting your strategy will benefit you in many ways:

  • It will clearly explain your company’s approach to open source and its use. 
  • It helps multi-departmental organizations make decisions and builds a healthy community around innovation.
  • The strategy document will give a clear-cut idea about the company’s investment and let more leaders and stakeholders be involved. Read more about open source leadership here.
What’s in it for stakeholders

Engaging stakeholders in your strategy is a crucial task. The goal of a company is to produce high-quality, secure, and reliable products. Open-source platforms help everyone in the organization grow and achieve their goals productively, with a vision of reaching the desired result. What stakeholders look for is the best outcome for the business and sustainability in the market.

One of the best features of open source is the high flexibility it offers in modifying source code. Since it is an open platform, it gives you ample room to change the code from anywhere, and it encourages developing new features as well as replacing or improving existing ones.

Adding to this, modifying and developing source code is easier in open source because you can take help and augment engineering resources from anywhere. Besides code modification, open source is a great asset for cost reduction: it lowers consulting, training, and support costs because there is no exclusive access to the technology. This also helps attract third-party developers and contributors to your business. Learn more about open source being recession-free and how it thrives during economic slumps here.

Whether your organization chooses open-source software for its high-quality source code, lower costs, or flexibility, or because it lets you embrace digital innovation by trying out disruptive technologies that keep you on the leading edge, it provides a competitive advantage. An open-source software strategy therefore requires every member of the organization to collaborate with the same vision to build an effective open-source environment.

From the vantage point of Amazon Web Services

Amazon, one of the world's leading companies, maintains more than 1,200 open-source projects on GitHub. Like many other companies, Amazon Web Services is making a significant contribution to the open-source space and funding open-source development specific to customer needs. Why invest so much in an open-source software strategy when it is not an open-source company like Red Hat? Matt Asay, head of open source strategy and marketing at AWS, explained that self-servicing is the main objective. As well as using a lot of open-source software in its own products, AWS provides promotional credits to open-source developers to entice them to use AWS products.

Their most recent open-source project is Babelfish for Aurora PostgreSQL. It is an open-source translation layer that makes it easy to migrate from Microsoft SQL Server to Amazon Aurora PostgreSQL.  

What to know before forming an open source strategy

We have covered the open-source software space in numbers and how to initiate a strategy. Now, two important aspects need to be kept in mind; these concepts will help businesses determine a program framework and maximize their open-source strategy. Let's have a look:

 

  • Putting standardised governance in place: A set of norms and guidelines is necessary while working on a project. As open-source software projects grow, contributions from different areas become more complex if you do not have proper policies and procedures. Proper governance sets the same guidelines for everyone working on a particular project and reduces security and legal risks. It also smoothes the transition when an internal project is open-sourced, because all the developers will already be working under a common governance code.
  • Putting forth a strategy for long-term, sustainable projects: For any business, it is important to have a sustainable approach to long-term relationships. A key measure of success for open-source software is whether your strategy gets other companies interested in your open-source projects. Long-term relationships with contributors and partners grow out of feature-rich developer communities whose code is productized for business profit under good governance; the ultimate goal is to reinvest back into the project community lifecycle. More on open source sustainability here.
How to create an open source strategy


Do you think words alone are sufficient to convey your idea? Obviously not; not everything can be remembered from conversation alone. You need something handy to refer back to. Similarly, in an open-source software strategy, a written document will help you align your goals and objectives. A documented strategy helps everybody on the team work in sync and understand the business objectives behind your open-source project.

Document your goals and objectives

The most important step is documenting your goals and objectives in a way everyone can understand. At a minimum, your document should explain the company's approach and the role an open-source strategy plays in it. To determine that approach, it is vital to understand your vision for any project: whether you are ready for an open-source strategy, which open-source strategy is helpful for your company, and how many people you need to contribute to your project.

Ian Varley, Software Architect at Salesforce, shared his thoughts on open-source strategy documentation, stating that "at Salesforce, we have internal documents that we circulate to our engineering team, providing strategic guidance and encouragement around open source."

Determine how your employees can contribute to open source

The key element of any project is the workforce behind it. Determining the right number of people and the right talent to shape your strategy is necessary for every business. Guide your employees on how they can consume open-source code and what contributions they can make. What acceptance, rejection, and exception policies should developers follow? How do you manage code that comes into one of your products from a project with a different licensing setup? All these questions need to be kept in mind while producing an open-source strategy document. Learn more about the perks of contributing to open source here.

Decide upon the number of people you need and the skills required

The next step is to decide on the size of your team and the skills required. You might need to put in a lot of effort to gather the right workforce, because in open-source projects a developer has to be comfortable with an ambiguous ecosystem and able to take criticism and feedback. Not everybody is comfortable working under a public spotlight, which is why it is important for any company to hire people who genuinely want to work in an open-source environment.

External resources:

An open-source software strategy depends on many external and internal resources, and these resources help you determine the criteria for setting your strategy. On the external side, there are many resources that can help you flesh out your open-source strategy, and the majority of them are free. Have a look at some of them:

  • Linux Foundation offers educational resources
  • Talk Openly Develop Openly shares practices for running open-source programs
  • Open Source Guide by GitHub helps on building open source community
  • Open source policy documentation by Google
  • InnerSource Commons, founded by PayPal, helps companies apply open-source practices to their internal development

Their approaches differ according to their needs, but the common agenda of contributing to an open-source environment is the same. You can adopt whichever of their practices and strategies fit your business needs.

Netflix, the biggest OTT platform, is a true standout in open source: its Open Source Software Center has contributed many tools and applications to the open-source community.

Internal resources:

While these kinds of external resources can give basic direction and serve as a benchmark, internal resources are key to setting your open-source business strategy. Your open-source approach should be tailored to your unique business model, and people inside your own organization are the best source of information. Moreover, you need to bring all the stakeholders to an agreement, to ensure that everyone is on the same page and invested in seeing the effort succeed.

Chalk out policies and guidelines

Once you understand your goals, objectives, and employee contributions, you have to set some policies and guidelines before working on an open-source software project. These points need to be written down properly in your strategy document.


Incorporate open source program offices

It is important to have a central hub so that information is easily accessible to all. Companies mostly set up program offices that act as one-stop shops for open source-related activities. A program office helps manage the process and serves as a resource where your contributors can get help and guidance. It can give your contributors a platform to share ideas, collaborate with other developers, and spark innovation. Coordination between internal functions such as legal, technical, and marketing also becomes easier with a program office.

Companies with poor open-source strategies will struggle to execute a project in an open ecosystem; for them, it becomes very important to give the workforce the right governance and supervision to achieve their goals. An open-source program office plays an important role here: it helps find contributors for your project and improves the company's reputation for both marketing and recruiting purposes. Most importantly, it supports employees, because the source code is open and they can ask for help anytime, anywhere.

Measure your success


To summarize, a clear strategy lets you justify your open-source projects and achieve the desired goals. Measuring metrics will go a long way towards understanding your success rate, but the bigger question is how you are going to decide which metrics to use. Ultimately, you need to look at your strategies and plans to come up with criteria that tell you whether or not you are succeeding.

Finally, if your open-source strategy is not producing results, consider restructuring it. Develop a strategic plan and set your goal; this will help make your project sustainable. Ignoring changes in the industry and staying unaware of current software trends will create chaos for your business. It is very important to understand what the project will do and how it is going to solve a problem, so open that README file and look for answers. Honest feedback will help you analyze problems and find better solutions.

Building a culture of psychological safety is necessary in an open-source ecosystem, so that developers feel comfortable while working and don't worry unduly about the results. All of these points make an open-source software strategy an important element of your business.


Evolving Web: Drupal 8/9 Migration: Migrating Hierarchical Taxonomy Terms

Fri, 2021/09/24 - 9:25pm

When you migrate relations between two different entities, you usually migrate the target entities first and then the source entities.

But what if the target and source are of the same entity and bundle type? How do you ensure that the target entity gets migrated before the source entity?

Have you run into a chicken-versus-egg problem?

In this article, we'll look at how to migrate hierarchical taxonomy terms to Drupal 8 or Drupal 9 by following along with an example migration. We’ll address the above questions to give you a good understanding of how to implement similar migrations in your projects.

The Drupal 8/9 Migration Tutorial Series

Before We Start

👩‍💻 Get up to speed on the latest version of Drupal! Ask us about the Upgrading to Drupal 9 & Beyond training.

The Problem

We have received two CSV files from our imaginary client: categories.csv and articles.csv.

We need to:

  • Migrate categories including their relationships (parent categories) from categories.csv
  • Migrate articles from articles.csv

Let’s get started.

Setting up the Migrations Create the migration module

We need to create a module for our migrations. In this example, we're naming it migrate_example_hierarchical_terms.

We then need to add the following modules as dependencies in the module declaration:
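A minimal sketch of what the module's info file might declare, assuming the usual migrate stack (migrate_plus for migration groups, migrate_source_csv for the CSV source plugin, and migrate_tools for the drush commands); the exact list in the example module may differ:

    # migrate_example_hierarchical_terms.info.yml (sketch)
    name: 'Migrate Example: Hierarchical Terms'
    type: module
    core_version_requirement: ^8.8 || ^9
    dependencies:
      - drupal:migrate
      - migrate_plus:migrate_plus
      - migrate_source_csv:migrate_source_csv
      - migrate_tools:migrate_tools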

Create a migration group

To group the migrations, we also need to create a migration group. To do so, we'll create a fairly simple configuration file in the module's config/install directory (following the migrate_plus naming convention, something like migrate_plus.migration_group.hierarchical_terms.yml) so that the group gets created when the module is installed.

The file’s contents should be as follows:

    id: hierarchical_terms
    label: Hierarchical Terms Group
    source_type: CSV

Writing the Migrations

The next thing to do is to write the actual migrations. For our requirements, we need two different migrations: one for categories and one for articles. The hierarchy handling will be part of the categories migration.

Since Drupal 8.1.x, migrations are plugins that should be stored in a migrations folder inside any module. You can still define them as configuration entities via the migrate_plus module, but I personally prefer to follow the core recommendation because it's easier to develop (you can make an edit and just rebuild the cache to update a migration).
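Putting it together, the example module's layout would look roughly like this (the group file name follows the migrate_plus convention and is an assumption here; the data paths come from the source definitions below):

    migrate_example_hierarchical_terms/
      migrate_example_hierarchical_terms.info.yml
      config/install/migrate_plus.migration_group.hierarchical_terms.yml
      data/
        categories.csv
        articles.csv
      migrations/
        categories.yml
        articles.yml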

Write the category migration

For the categories migration, we're creating a categories.yml file inside the migrations folder. This migration uses CSV as source, so we need to declare it like this:

    source:
      plugin: 'csv'
      path: 'modules/custom/migrate_example_hierarchical_terms/data/categories.csv'
      delimiter: ','
      enclosure: '"'
      header_offset: 0
      ids:
        - name
      fields:
        0:
          name: name
          label: 'Name'
      ...

The destination part of the migration is pretty standard so we'll skip it, but you can still look at it in the code.

For the process, the important part is the parent field:

    parent:
      - plugin: migration_lookup
        migration: categories
        source: parent
      - plugin: default_value
        default_value: 0

We're using the migration_lookup plugin and the current migration (categories) to look for the parent entities. We're allowing stubs to be created (by not setting no_stub: true) so that if a child term is migrated before its parent term, the parent will be created as a stub and it will be completed later with the real data.

We're also defaulting to 0 as parent if no parent is set in the source data. This way, the hierarchy will be preserved when running the migration.
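To make the stub behaviour concrete, consider a hypothetical categories.csv (the column names match the source definition above; the real client data will differ). When the "Mathematics" row is processed first, its parent "Sciences" does not exist yet, so a stub term is created and later completed by the "Sciences" row:

    name,parent
    Mathematics,Sciences
    Sciences,
    Physics,Sciences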

Write the article migration

To migrate the articles, we've created the articles.yml migration file. If you have previous experience with migrations in Drupal 8, it's pretty straightforward. It's also using CSV as a source, so its source section is pretty similar to the one in the categories migration. The destination is set to be the article bundle of the node entity type.

The process section looks like this:

    process:
      title: title
      body/value: content
      field_category:
        - plugin: migration_lookup
          migration: categories
          source: category

Title from the CSV file is mapped directly to title in the node. The content column in the CSV file is mapped to the value sub-field of the body field. For field_category, we're also using the migration_lookup plugin to get the categories that we've previously migrated.

We're also setting a dependency to the categories migration to ensure that categories run before articles:

    migration_dependencies:
      required:
        - categories

Now, everything is in place and we're ready to run the migrations.

Running the Migrations

Given that we have set dependencies, we can instruct Drupal to run the migration group and it will run the migrations in the right order.

To do so, execute drush mim --group=hierarchical_terms. The output will look like this:

    [notice] Processed 11 items (7 created, 4 updated, 0 failed, 0 ignored) - done with 'categories'
    [notice] Processed 10 items (10 created, 0 updated, 0 failed, 0 ignored) - done with 'articles'

Note that the counts for categories are not what you'd expect looking at the data. This is because of the stub creation that happened during the migration. However, if you run drush ms, the output will be as expected:

    Group                                          Migration ID  Status  Total  Imported  Unprocessed  Last Imported
    ---------------------------------------------  ------------  ------  -----  --------  -----------  -------------------
    Hierarchical Terms Group (hierarchical_terms)  categories    Idle    9      9         0            2020-08-21 19:18:46
    Hierarchical Terms Group (hierarchical_terms)  articles      Idle    10     10        0            2020-08-21 19:18:46

Evolving Web: Drupal 8/9 Migration: Migrating Files and Images (Part 3)

Fri, 2021/09/24 - 7:00pm

Now that you've completed the migration of academic program nodes as mentioned in part 1 of this series and the migration of taxonomy terms as mentioned in part 2, this article focuses on the third requirement.

We have images for each academic program. The base names of the images are listed in the CSV data source for academic programs. To keep things easy, we have only one image per program.

This article assumes:

  • You have read the first part of this article series on migrating basic data.
  • You are able to write basic entity migrations.
  • You understand how to write multiple process plugins in migrations.

👩‍💻 Get up to speed on Drupal 9! Watch Evolving Web and Pantheon's webinar on Drupal 9 migrations.

The Drupal 8/9 Migration Series

Though the problem might sound complex, the solution is as simple as two steps.

Step 1: Import images as "file" entities

First, we need to create a file entity for each file, because Drupal treats files as file entities which have their own IDs. Drupal then treats node-file associations as entity references, referring to the file entities by their IDs.

We create the file entities in the migrate_plus.migration.program_image.yml file, this time using some other process plugins. We re-use the program.data.csv file to import the files, so the source definition again uses the CSV plugin. We specify the keys parameter in source as the column containing file names, i.e., Image file. This way, we will be able to refer to these files in other migrations using their names, e.g., engineering.png.

    keys:
      - Image file

Apart from that, we use some constants to refer to source and destination paths for the images.

    constants:
      file_source_uri: public://import/program
      file_dest_uri: 'public://program/image'

file_source_uri refers to the path from which files are read during the import, and file_dest_uri refers to the destination path where files should be copied. The newly created file entities will refer to files stored in this destination directory. The public:// URI points to the files directory of the site in question, where all of the site's public files are stored.

    file_source:
      - plugin: concat
        delimiter: /
        source:
          - constants/file_source_uri
          - Image file
      - plugin: urlencode
    file_dest:
      - plugin: concat
        delimiter: /
        source:
          - constants/file_dest_uri
          - Image file
      - plugin: urlencode

Where do we use these constants? In the process element, we prepare two paths - the file source path (file_source) and the file destination path (file_dest).

  • file_source is obtained by concatenating the file_source_uri with the Image file column which stores the file's basename. Using delimiter: / we tell the migrate module to join the two strings with a / (slash) in between to ensure we have a valid file name. In short, we do file_source_uri . '/' . basename using the concat plugin.
  • file_dest, in a similar way, is file_dest_uri . '/' . basename. This is where we utilize the constants we defined in the source element.

Now, we use the file_source and file_dest paths generated above with plugin: file_copy. The file_copy plugin simply copies the files from the file_source path to the file_dest path; all the steps above were just so that we could refer to the complete source and destination paths while copying files. The file gets copied, and the uri property gets populated with the destination file path.

    uri:
      plugin: file_copy
      source:
        - '@file_source'
        - '@file_dest'

We also use the existing file names as names of the newly created files. We do this using a direct assignment of the Image file column to the filename property as follows:

    filename: Image file

Finally, since the destination of the migration is entity:file, the migrate module would use the filename and uri properties to generate a file entity, thereby generating a unique file ID.
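For reference, the destination section of this migration is tiny; a minimal sketch matching the entity:file destination described above:

    destination:
      plugin: 'entity:file'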

Step 2: Associate files to academic programs

Once the heavy lifting is done and we have our file entities, we need to put the files to use by associating them with academic programs. To do this, we add processing instructions for field_image in migrate_plus.migration.program_data.yml. Just as we did for taxonomy terms, we tell the migrate module that the Image file column contains a unique file name referring to a file entity created during the program_image migration. We assign these file references to the field_image/target_id property because, in Drupal 8, file associations are also treated as entity references.

    'field_image/target_id':
      plugin: migration_lookup
      migration: program_image
      source: Image file

However, in the data source for academic programs, we also see a column named Image alt. Can we migrate it as well? We can, with one additional line of YAML:

    'field_image/alt': Image alt

And we are done! If you update the configuration introduced by the c11n_migrate module and run the migration with the command drush config-import --partial --source=sites/sandbox.com/modules/c11n_migrate/config/install -y && drush migrate-import --group=c11n --update -y, you should see the following output:

    $ drush mi --group=c11n --update -y
    Processed 8 items (0 created, 8 updated, 0 failed, 0 ignored) - done with 'program_tags'
    Processed 4 items (4 created, 0 updated, 0 failed, 0 ignored) - done with 'program_image'
    Processed 4 items (0 created, 4 updated, 0 failed, 0 ignored) - done with 'program_data'

To make sure that the image file entities are created and available during the academic program migration, we specify the program_image migration in the migration_dependencies for the program_data migration. Now, when you run these migrations, the image files get associated with the academic program nodes.
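In YAML terms, this means adding program_image to the dependency block we saw in part 2 (uncommenting the line we had left commented there):

    migration_dependencies:
      optional:
        - program_tags
        - program_image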

Migrated image visible in UI.

Next steps

This is part three in a series of three articles on migrating data in Drupal 8 and 9. If you missed it, go back and read part one: Migration basics or part two: Migrating taxonomy terms and term references.


Evolving Web: Drupal 8/9 Migration: Migrating Taxonomy Term References (Part 2)

Fri, 2021/09/24 - 6:00pm

If you followed along while we migrated academic program nodes in part 1 of this series, Drupal 8/9 Migration: Migrating Basic Data (Part 1), we'll be continuing with the same example this time around as we learn how to import tags (taxonomy terms) related to those academic programs. This article assumes:

  • You've read the first article in this series on migrating basic data.
  • You're able to write basic entity migrations.
  • You're aware of taxonomy vocabularies/terms and their usage.

👩‍💻 Get up to speed on Drupal 9! Watch Evolving Web and Pantheon's webinar on Drupal 9 migrations.

The Drupal 8/9 Migration Series

Importing tags as taxonomy terms

As a general rule for migrating relations between two entities, we first need to write a migration for the target entities. In this case, academic programs have a field named field_tags in which we wish to store the IDs of taxonomy terms from the tags vocabulary, so we need to write a migration to import the tags first. That way, when the migrations for academic programs run, the tags will already exist on the Drupal 8 site, and the migrate API will be able to refer to these tags by their IDs.

To achieve this, we write a simple migration for the tags, as in the migrate_plus.migration.program_tags.yml file; a sketch of it follows the list below. One noteworthy thing about this migration is that we use the tag text itself as the unique ID for the tags. This is because:

  • The tags' data-source, program.tags.csv, does not provide any unique key for the tags.
  • The academic programs' data-source, program.data.csv, refers to the tags using the tag text (instead of unique IDs).
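Here is a sketch of what migrate_plus.migration.program_tags.yml could contain, assuming a single Tag column in program.tags.csv; the actual file in the example module may differ in its details:

    id: program_tags
    label: Tags for academic programs.
    migration_group: c11n
    source:
      plugin: csv
      path: 'public://import/program/program.tags.csv'
      header_row_count: 1
      keys:
        - Tag
    process:
      name: Tag
      vid:
        plugin: default_value
        default_value: tags
    destination:
      plugin: 'entity:taxonomy_term'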

Once the tag data is imported, all we have to do is add some simple lines of YAML in the migration definition for academic programs to tell Drupal how to migrate the field_tags property of academic programs.

As we did for program_level in the previous article, we will be specifying multiple plugins for this property:

    field_tags:
      - plugin: explode
        delimiter: ', '
        source: Tags
      - plugin: migration_lookup
        migration: program_tags

Here is an explanation of what we are actually doing with the multiple plugins:

  • explode: Taking a look at the data source, we notice that academic programs have multiple tags separated by commas. So, as a first step, we use the explode plugin in order to split (or "explode") the tags by the delimiter , (comma), thereby creating an array of tags. We do this using plugin: explode and delimiter: ', '. We leave a space after the comma because the data source has a space after its commas, meaning that adding that space in the delimiter will allow us to exclude the spaces from the imported tags.
  • migration_lookup: Now that we have an array of tags, each tag identifying itself using its unique tag text, we can tell the migrate module that these tags are the same ones we imported in migrate_plus.migration.program_tags.yml and that the tags generated during that migration are to be used here in order to associate them to the academic programs. We do this using plugin: migration_lookup and migration: program_tags. You can read more about the migration_lookup plugin on Drupal.org.
    migration_dependencies:
      optional:
        - program_tags
        # - program_image

To make sure that tag data is imported and available during the academic program migration, we specify the program_tags migration in the migration_dependencies for the program_data migration. Now, when you re-run these migrations, the taxonomy terms get associated with the academic program nodes. At this stage, you can keep the program_image dependency commented out, because we haven't written that migration yet.

Migrated tags visible in UI.

As simple as it may sound, this is all that is needed to associate the tags with the academic programs! All that is left is re-installing the c11n_migrate module and executing the migrations using the following drush command: drush mi --group=c11n --update. You should see the following output:

    $ drush mi --group=c11n --update
    Processed 8 items (8 created, 0 updated, 0 failed, 0 ignored) - done with 'program_tags'
    Processed 4 items (0 created, 4 updated, 0 failed, 0 ignored) - done with 'program_data'

Because of the migration_dependencies we specified, the program_tags migration was run before the program_data migration.

Importing terms without a separate data-source

For the sake of demonstration, I also included an alternative approach for migrating the field_program_type property. For the program type, I used the entity_generate plugin, which comes with the migrate_plus module. This is how the plugin works:

  • Looks up an entity of a particular type (in this case, taxonomy_term) and bundle (in this case, program_types) based on a particular property (in this case, name).
  • If no matching entity is found, an entity is created on the fly.
  • The ID of the existing / created entity is returned for use in the migration.
    field_program_type:
      plugin: entity_generate
      source: Type
      entity_type: taxonomy_term
      bundle_key: vid
      bundle: program_types
      value_key: name

So, in the process instructions for field_program_type, I use plugin: entity_generate. During the migration, for every program type, the entity_generate plugin is called and a taxonomy term is associated with the academic program. The disadvantage of the entity_generate approach is that when we roll back the migration, the taxonomy terms created during the migration are not deleted.


Evolving Web: Drupal 8/9 Migration: Migrating Basic Data (Part 1)

Fri, 2021/09/24 - 5:35pm

Usually, when a huge site decides to migrate to Drupal, one of the site owners' biggest concerns is migrating the old site's data into the new Drupal site. The old site might or might not be a Drupal site, but given that the new site is on Drupal, we can make use of the cool migrate module to import data from a variety of data sources, including but not limited to XML, JSON, CSV, and SQL databases.

👩‍💻 Get up to speed on Drupal 9! Watch Evolving Web and Pantheon's webinar on Drupal 9 migrations.

This article revolves around an example module named c11n_migrate showing how to go about importing basic data from a CSV data source, though things would work pretty similarly for other types of data sources.

The Drupal 8/9 Migration Tutorial Series

The Problem

As per project requirements, we wish to import certain data for an educational and cultural institution.

  • Academic programs: We have a CSV file containing details related to academic programs. We are required to create nodes of type program with the data. This is what we discuss in this article.
  • Tags: We have a CSV file containing details related to tags for these academic programs. We are required to import these as terms of the vocabulary named tags. This will be discussed in a future article.
  • Images: We have images for each academic program. The base name of the images are mentioned in the CSV file for academic programs. To make things easy, we have only one image per program. This will be discussed in a future article.
Executing Migrations

Before we start with actual migrations, a few things to note about running your migrations:

  • Though the basic migration framework is part of Drupal 8 core as the migrate module, to be able to execute migrations you must install the migrate_tools module. You can then use the command drush migrate-import --all to execute all migrations. In this tutorial, we also install some other modules, like migrate_plus and migrate_source_csv.
  • Migration definitions in Drupal 8 are in YAML files, which is great. But the fact that they are located in the config/install directory implies that these YAML files are imported when the module is installed. Hence, any subsequent changes to the YAML files would not be detected until the module is re-installed. We solve this problem by re-importing the relevant configuration manually, e.g., drush config-import --partial --source=path/to/module/config/install.
  • While writing a migration, we usually update the migration over and over and re-run them to see how things go. To do this quickly, you can re-import config for the module containing your custom migrations (in this case the c11n_migrate module) and execute the relevant migrations in a single command like drush config-import --partial --source=sites/sandbox.com/modules/c11n_migrate/config/install -y && drush migrate-import --group=c11n --update -y.
  • To execute the migrations in this example, you can download the c11n_migrate module sources and rename the downloaded directory to c11n_migrate. The module should work without any trouble for a standard Drupal 8 install.
The Module

Though it's a matter of personal preference, I usually name project-specific custom modules with the prefix c11n_ (a numeronym for the word customization). That way, I have a naming convention for custom modules, and I can copy any custom module to another site without worrying about having to change prefixes. Very small customizations can be put into a general module named c11n_base.

To continue, there is nothing fancy about the module definition as such. The c11n_migrate.info.yml file includes a basic project definition with certain dependencies on other modules. Though the migrate module is in Drupal 8 core, we need most of these dependencies to enable and enhance migrations on the site:

  • migrate: Without the migrate module, we cannot migrate!
  • migrate_plus: Improves the core migrate module by adding certain functionality like migration groups and usage of YML files to define migrations. Apart from that, this module includes an example module which I referred to on various occasions while writing my example module.
  • migrate_tools: General-purpose drush commands and basic UI for managing migrations.
  • migrate_source_csv: The core migrate module provides a basic framework for migrations, which does not include support for specific data sources. This module makes the migrate module work with CSV data sources.

Apart from that, we have a c11n_migrate.install file to re-position the migration source files in the site's public:// directory. Most of the migration magic takes place in config/install/migrate_plus.* files.

Migration Group

Just as we used to implement hook_migrate_api() in Drupal 7 to declare the API version, migration groups, individual migrations, and more, in Drupal 8 we do something similar. Instead of implementing a hook, we create a migration group declaration inside the config/install directory of our module. The file must be named migrate_plus.migration_group.NAME.yml, where NAME is the machine name of the migration group; in this case, migrate_plus.migration_group.c11n.yml.

    id: c11n
    label: Custom migrations
    description: Custom data migrations.
    source_type: CSV files
    dependencies:
      enforced:
        module:
          - c11n_migrate

We create this group to act as a container for all related migrations. As we see in the extract above, the migration group definition defines the following:

  • id: A unique ID for the migration group. This is usually the NAME part of the migration group declaration file name, as discussed above.
  • label: A human-friendly name of the migration group as it would appear in the UI.
  • description: A brief description about the migration group.
  • source_type: This would appear in the UI to provide a general hint as to where the data for this migration comes from.
  • dependencies: Though this might sound a bit strange to Drupal 7 users, this segment is used to define the modules on which the migration group depends. If one of these required modules is missing or removed, the migration group is automatically removed as well.

Once done, if you install/re-install the c11n_migrate module and visit the admin/structure/migrate page, you should see the migration group we created above!

Migration Definition: Metadata

Now that we have a module to put our migration scripts in and a migration group for grouping them together, it's time we write a basic migration! To get started, we import basic data about academic programs, ignoring complex stuff such as tags, files, etc. In Drupal 7 we used to do this in a file containing a PHP class that extended the Migration class provided by the migrate module. In Drupal 8, like many other things, we do this in a YAML file, in this case migrate_plus.migration.program_data.yml.

    id: program_data
    label: Academic programs and associated data.
    migration_group: c11n
    migration_tags:
      - academic program
      - node
    # migration_dependencies:
    #   optional:
    #     - program_tags
    #     - program_image
    dependencies:
      enforced:
        module:
          - c11n_migrate

In the above extract, we declare the following metadata about the migration:

  • id: A unique identifier for the migration. In this example, I allocated the ID program_data; hence, the migration declaration file is named migrate_plus.migration.program_data.yml. We can execute this specific migration with the command drush migrate-import ID.
  • label: A human-friendly name of the migration as it would appear in the UI.
  • migration_group: This puts the migration into the migration group c11n we created above. We can execute all migrations in a given group with the command drush migrate-import --group=GROUP.
  • migration_tags: Here we provide multiple tags for the migration; just like groups, we can execute all migrations with the same tag using the command drush migrate-import --tag=TAG.
  • dependencies: Just as in the case of migration groups, this segment is used to define modules on which the migration depends. If one of these required modules is missing or removed, the migration is automatically removed.
  • migration_dependencies: This element lists the IDs of other migrations which must be run before this migration. For example, if we are importing articles and their authors, we need to import author data first so that we can refer to the author's ID while importing the articles. Note that we can leave this undefined or commented out for now, as we do not have any other migrations defined yet. I defined this section only after I finished writing the migrations for tags, files, etc.
Migration Definition: Source

    source:
      plugin: csv
      path: 'public://import/program/program.data.csv'
      header_row_count: 1
      keys:
        - ID
      fields:
        ID: Unique identifier for the program as in the data source.
        Title: Name of the program.
        Body: A description for the program.
        Level: Whether the program is for undergraduates or graduates.
        Type: Whether it is a full-time or a part-time program.
        Image file: Name of the image file associated with the program.
        Image alt: Alternate text for the image for accessibility.
        Tags: Comma-separated strings to use as tags.
        Fees: We will ignore this field as per requirement.

Once done with the meta-data, we define the source of the migration data with the source element in the YAML.

  • plugin: The plugin responsible for reading the source data. In our case we use the migrate_source_csv module which provides the source plugin csv. There are other modules available for other data sources like JSON, XML, etc.
  • path: Path to the data source file - in this case, the program.data.csv file.
  • header_row_count: This is a plugin-specific parameter which allows us to skip a number of rows from the top of the CSV. I found this parameter reading the plugin class file modules/contrib/migrate_source_csv/src/Plugin/migrate/source/CSV.php, but it is also mentioned in the docs for the migrate_source_csv module.
  • keys: This parameter defines a number of columns in the source data which form a unique key in the source data. Luckily in our case, the program.data.csv provides a unique ID column so things get easy for us in this migration. This unique key will be used by the migrate module to relate records from the source with the records created in our Drupal site. With this relation, the migrate module can interpret changes in the source data and update the relevant data on the site. To execute an update, we use the parameter --update with our drush migrate-import command, for example drush migrate-import --all --update.
  • fields: This parameter provides a description for the various columns available in the CSV data source. These descriptions just appear in the UI and explain purpose behind each column of the CSV.
  • constants: We define certain hard-coded values for properties that do not have corresponding columns in the data source (see the sketch just below this list).
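For illustration, the constants referenced in the process section below (constants/bool_0, constants/uid_root, and so on) would be declared inside the source element roughly like this; the names are taken from the process mapping, and the exact values are an assumption:

    source:
      # ...
      constants:
        bool_0: 0
        bool_1: 1
        uid_root: 1
        restricted_html: restricted_html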

Once done, the effect of the source definition should be visible on the admin/structure/migrate/manage/c11n/migrations/program_data/source page.

Migration Definition: Destination

    destination:
      plugin: 'entity:node'
      default_bundle: program

In comparison to the source definition, the destination definition is much simpler. Here, we need to tell the migrate module how we want it to use the source data. We do this by specifying the following parameters:

  • plugin: Just like source data is handled by separate plugins, we have destination plugins to handle the output of the migrations. In this case, we want Drupal to create node entities with the academic program data, so we use the entity:node plugin.
  • default_bundle: Here, we define the type of nodes we wish to create with this migration. Though we can override the bundle for individual items, this parameter provides a default bundle for entities created by the migration. We will only be creating program nodes, so we mention that here.

A quick look at the program node fields.

Migration Definition: Mapping and Processing

If you've ever written a migration in an earlier version of Drupal, you might already know that migrations are usually not as simple as copying data from one column of a CSV file to a given property of the relevant entity. We need to process certain columns, eliminate certain columns, and much more. In Drupal 8, we define these processes using a process element in the migration declaration. This is where we put our YAML skills to real use.

    process:
      title: Title
      sticky: constants/bool_0
      promote: constants/bool_1
      uid: constants/uid_root
      'body/value': Body
      'body/format': constants/restricted_html
      field_program_level:
        - plugin: callback
          callable: strtolower
          source: Level
        - plugin: default_value
          default_value: graduate
        - plugin: static_map
          map:
            graduate: gr
            undergraduate: ug

Here is a quick look at the parameters we just defined:

  • title: An easy property to start with, we just assign the Title column of the CSV as the title property of the node. Though we do not explicitly mention any plugin for this, in the background, Drupal uses the get plugin to handle this property.
  • sticky: Though Drupal can apply the default value for this property if we skip it (like we have skipped the status property), I wanted to demonstrate how to specify a hard-coded value for a property. We use the constant constants/bool_0 to make the imported nodes non-sticky with sticky = 0.
  • promote: Similarly, we ensure that the imported nodes are promoted to the front page by assigning constants/bool_1 for the promote property.
  • uid: Similarly, we specify default owner for the article as the administrative user with uid = 1.
  • body: The body for a node is a filtered long text field and has various sub-properties we can set. So, we copy the Body column from the CSV file to the body/value property (instead of assigning it to just body). In the next line, we specify the body/format property as restricted_html. Similarly, one can also add a custom summary for the nodes using the body/summary property. However, we should keep in mind that while defining these sub-properties, we need to wrap the property name in quotes because we have a / in the property name.
  • field_program_level: With this property I intend to demonstrate a number of things - multiple plugins, the static_map plugin, the callback plugin and the default_value plugin.
    • Here, we have the plugin specifications as usual, but we have small dashes with which we are actually defining an array of plugins or a plugin pipeline. The plugins would be called one by one to transform the source value to a destination value. We specify a source parameter only for the first plugin. For the following plugins, the output of the previous plugin would be used as the input.
    • The source data uses the values graduate/undergraduate with variations in case, such as Undergraduate or UnderGraduate. With the first plugin, we call the function strtolower (with callable: strtolower) on the Level property (with source: Level) to standardize the source values. After this plugin is done, all Level values are in lower-case.
    • Now that the values are in lower-case, we face another problem. In the Math & Economics row, no Level value is specified, and if no value exists for this property, the row would be ignored during the migration. As per the client's instructions, we can use the default value graduate when a Level is not specified. So we use the default_value plugin (with plugin: default_value) and assign the value graduate (using default_value: graduate) for rows which do not have a Level. Once this plugin is done, all rows technically have a value for Level.
    • We notice that the source has the values graduate/undergraduate, whereas the destination field only accepts gr/ug. In Drupal 7, we would have written a few lines of code in a ProgramDataMigration::prepareRow() method, but in Drupal 8, we just write some more YAML. To tackle this, we pass the value through a static_map (with plugin: static_map) and define a map of new values which should be used instead of old values (with the map element). And we are done! Values would automatically be translated to gr or ug and assigned to our program nodes.

With the parameters above, we can write basic migrations with basic data-manipulation. If you wish to see another basic migration, you can take a look at migrate_plus.migration.program_tags.yml. Here is how the migration summary looks once the migration has been executed.

    $ drush migrate-import program_data --update
    Processed 4 items (4 created, 0 updated, 0 failed, 0 ignored) - done with 'program_data'

Once done correctly, the nodes created during the migration should also appear in the content administration page just as expected.


Droptica: Drupal Security Modules and Best Practices for Your Website

Fri, 2021/09/24 - 11:09am

The security of the solutions we provide is very important to us. Because of this, and because Drupal is one of the most secure CMSs available, in this article we'll present a list of recommendations that will take the security of your Drupal website to an even higher level.

Drupal security. Why is it good to stay up to date?

Your application is less susceptible to the exploitation of known vulnerabilities. That's it. But it means so much more.

As I've mentioned before, updating modules and libraries is one of the simpler methods of improving the security of your application. The Drupal community, supported by the dedicated Drupal Security Team, constantly monitors user reports of potential security bugs and helps the modules' authors solve them. The result of these actions is module updates that introduce security patches.

Configuration of the login panel

An incorrectly configured login panel may reveal whether a user with the login provided in the form exists in the database. If the panel returns different information when the attacker provides an incorrect login than when the login is correct, we're dealing with a brute-force attack vector: the attacker may first harvest valid logins and then brute-force the passwords.

Drupal modules increasing website security

Drupal has several modules that may improve security. Their configuration doesn't require extensive technical knowledge and doesn't take as much time as other methods of securing a website. We present below some tools of this type.

Drupal Password Policy

The Password Policy module allows you to enforce restrictions on users' passwords by defining password policies. A policy is defined by a set of constraints that must be met before a user's password change is accepted. Each constraint has a parameter specifying the minimum number of conditions that must be fulfilled to meet the requirement.

Let's suppose we add an uppercase-letter constraint with parameter 2 and a digit constraint with parameter 4. This means that a user password must contain at least two uppercase letters and at least four digits to be accepted.

The module also implements an "expiring password" function: when their old password expires, the user is forced to change it and is optionally blocked.

Drupal Password Policy allows administrators to force specific users or entire roles to change their password the next time they log in. The request to change the password, along with the appropriate form, appears as a popup instead of redirecting the user to the typical user/{user_id}/edit page.

Drupal Security Review

The Security Review module automates testing for many easy-to-make mistakes that leave a website unsafe. This Drupal module is intuitive and very easy to use. The quickly prepared report is legible and clearly indicates what needs to be improved. The module doesn't automatically introduce changes to your page; the results of the report should be analyzed and, in selected cases, appropriate corrections made. Not all recommendations will be applicable. It all depends on the unique factors of your website.

Drupal Security Kit

The Security Kit module provides a variety of security-enhancing options that help reduce the risk of various vulnerabilities in your application being exploited. The module reduces the likelihood of many types of attacks, including:

  • cross-site scripting,
  • cross-site request forgery,
  • clickjacking.

The full description of the functionalities can be found in the article linked above.

Source: Drupal.org

Drupal Paranoia

The Paranoia module identifies most places where a user can execute PHP code via the Drupal interface and then blocks them. This reduces the potential threat resulting from an attacker gaining highly privileged access to Drupal.

What does the module do?

  • Blocks granting the "use PHP for block visibility" permission.
  • Blocks the ability to create text formats that use the PHP filter.
  • Blocks the ability to edit the user account with uid 1.
  • Blocks granting permissions that may reduce the website's security.
  • Blocks disabling this module. To disable it, you need to edit the database.

To take full advantage of this module, you need to identify all the entities, fields, and blocks that use the Drupal PHP filter, change them so that they work without it, and then remove the standard PHP filter available at admin/config/content/formats.

How to create secure code in Drupal?

Drupal uses solutions that are considered secure when used according to the standards. There are many rules you need to follow when writing secure code. We present the most important of them below.

Use Twig

The Twig engine "auto-escapes" all variables by default. This means that all the strings rendered by the Twig templates (e.g., everything between {{ }}) are automatically cleared of the elements that may compromise the security of your application.

When rendering attributes, be sure to wrap them in quotation marks (") or apostrophes ('). For example, class="{{ foo }}", not class={{ foo }}.

Use placeholders

The Translation API also sanitizes strings. Use it for strings you want to translate and later render, for example, on the frontend.

In Drupal, there are three types of placeholders in the Translation API:

@variable

We use it when we want to substitute a plain string or a MarkupInterface object for the placeholder. The value is escaped unless it implements MarkupInterface.

%variable

We use it when we want the value escaped and additionally wrapped in <em> placeholder tags.

:variable

We use it when the value we want to substitute is a URL to be embedded in an href attribute; dangerous protocols are stripped from it.
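All three placeholder types can appear in the same call. A small illustrative sketch (the $account and $node variables are assumed to be in scope):

// @name is escaped, %title is escaped and wrapped in <em> tags,
// and :url is additionally stripped of dangerous protocols.
$output = t('Hello @name, the article %title is at <a href=":url">this link</a>.', [
  '@name' => $account->getDisplayName(),
  '%title' => $node->getTitle(),
  ':url' => $node->toUrl()->toString(),
]);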

You can find more about placeholders at Drupal.org.

Learn the API and use it

Drupal provides many features for cleaning up strings. Among them are:

t(), Drupal::translation()->formatPlural()

Used along with the placeholders described above, these allow for creating secure, translatable strings.

Html::escape()

Used to escape plain text for safe output in HTML.

Xss::filterAdmin()

Use it when you want to clean up the text entered by an admin who should be able to use most of the HTML tags and attributes.

UrlHelper::stripDangerousProtocols(), UrlHelper::filterBadProtocol()

Useful for URL checking; can be used together with FormattableMarkup (the successor of the deprecated SafeMarkup::format()).

Strings that have passed through the functions t(), Html::escape(), Xss::filter() or Xss::filterAdmin() are automatically considered safe, as is markup produced from render arrays by the Renderer service.
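A quick illustration of a few of these helpers (the input values are made up for the example):

use Drupal\Component\Utility\Html;
use Drupal\Component\Utility\UrlHelper;
use Drupal\Component\Utility\Xss;

// Plain text: HTML special characters become entities.
$safe_text = Html::escape('<script>alert("x")</script>');

// Admin-entered markup: most tags survive, dangerous ones are stripped.
$safe_markup = Xss::filterAdmin($admin_body);

// URLs: "javascript:" and similar protocols are removed.
$safe_url = UrlHelper::stripDangerousProtocols('javascript:alert(1)');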

Filter text also in JavaScript

Server-side text filtering is considered best practice. However, there are cases where filtering also takes place on the client side to provide an additional, temporary layer of filtering. It's useful, for example, when rendering elements that are updated as the user types (that is, when changes are being introduced to the DOM tree). To filter text in JavaScript in Drupal, use the Drupal.checkPlain() function. It escapes HTML special characters in the text and thus protects against, for example, DOM-based cross-site scripting. For instance, Drupal.checkPlain('<script>') returns '&lt;script&gt;', which is safe to insert into the page as text.

Use an abstraction layer when working with a database

We recommend never concatenating raw values directly into queries. Use placeholders instead.

Example:

\Database::getConnection()->query("SELECT foo FROM {table} t WHERE t.name = '" . $_GET['user'] . "'");

vs.

\Database::getConnection()->query('SELECT foo FROM {table} t WHERE t.name = :name', [':name' => $_GET['user']]);

In the second case, instead of using the value from the user parameter directly, we pass it as a substitute for the :name placeholder. This way, before putting the value into the final query, Drupal will escape it so that it cannot cause SQL injection.
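Going a step further, the dynamic query builder handles the placeholders for you. A minimal sketch against the same hypothetical {table}:

// condition() parameterizes the value, so no manual escaping is needed.
$names = \Drupal::database()->select('table', 't')
  ->fields('t', ['foo'])
  ->condition('t.name', $_GET['user'])
  ->execute()
  ->fetchCol();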

Security audit

The process of "hardening" a site should end with a comprehensive security audit that will catch even more potential threats on your page.

A security audit should include:

Modules and libraries review. This means checking the versions of the installed Drupal modules and reviewing the patches, PHP libraries, and JavaScript libraries.

Configuration review. As part of this activity, we carry out authorization audits for the roles, views, routing.yml files in custom modules, text formats, error logging and forms.

Repository review. We check the custom modules and themes, including routing, custom forms, SQL queries, filtering mechanisms and file permissions.

Repository contents identification. We audit the contents of the settings.php and .env files. We also conduct an audit of deeply hidden elements, checking the repository for items such as SSL private keys or database copies and dumps.

You can find the full description of many of the elements presented in the above list in the linked articles.

Drupal security modules - summary

Depending on your level of Drupal knowledge and experience, you can introduce appropriate corrections to make the application more secure. The examples presented in this article will definitely reduce the number of attack vectors and the likelihood of their exploitation. We recommend analyzing the available options and introducing the changes or new elements that'll reduce the risk of an attack on your application. If you need help with such activities, our Drupal support team can conduct an audit of your website's security.

Categories:

PreviousNext: Overview of our Front-end Stack

Fri, 2021/09/24 - 4:12am

Front-end technology stacks tend to move quickly. At PreviousNext, we have been constantly evolving our tech stack to take advantage of best practices.

In this post, we take a closer look at the front-end tools we use at PreviousNext in 2021 and some of the rationale behind the decisions.

by kim.pepper / 24 September 2021

Our front-end stack consists of the following tools:

  • npm manages all our dependencies and runs our build scripts.
  • PostCSS modernises our CSS.
  • kss-node builds the styleguide.
  • Stylelint and ESLint lint our CSS and JS.
  • Browsersync is used for testing and CSS live reloading.
  • Babel and rollup.js transpile and bundle ES6 JavaScript.
NPM

Modern front-end development leverages many open-source libraries for JavaScript. To manage all this, we use npm as the package manager. There was a period where frustrations with performance led to us switching to yarn, but these issues have been resolved in more recent versions of npm, so we switched back.

We also store a number of script aliases in package.json to simplify day-to-day tasks, such as compiling CSS/JS and generating the styleguide. For example:

$ npm start

will automatically watch for any changes to .css or .js files, rebuild the CSS and styleguide, and live-reload the browser via Browsersync.

KSS

KSS Node is a Node.js implementation of Knyle Style Sheets (KSS), "a documentation syntax for CSS" intended to be readable by both humans and machines. We use KSS to generate our living styleguides.

Browsersync

We use Browsersync to speed up the feedback loop. Changes to CSS and JS are compiled and automatically sync'd with the browser, so you see changes immediately.

Maintaining coding standards

Linting is required by default for all custom CSS and JS files. This makes code reviews way easier, as we're not having to pick up on style changes and can focus on the meaningful changes.

We use Stylelint for CSS linting and ESLint for JavaScript linting.

SMACSS, BEM and DRY

We follow the SMACSS approach to categorisation, breaking CSS down into modular components.

We also follow the basic BEM naming pattern.

When combined with DRY (don’t repeat yourself) approach to CSS in general, this ensures the Drupal theme meets current coding standards.

We use some alternative terminology as these are used in Drupal already (e.g. blocks and modules). They map to the original as follows:

# From SMACSS
module = component
submodule = variant
theme = variant

# From BEM
block = component
modifier = variant

CSS Structure and Categorisation

We like to compile CSS files into separate components:

# Custom variables; included in all other files.
/src/_constants.css

# Base styles; resets, element defaults, fonts, etc.
/src/base/*

# Layouts and grid systems.
/src/layout/*

# Form fields.
/src/form/*

# Components; independently styled components that can live anywhere in a layout.
/src/*

Testing for accessibility

We regularly run our Drupal theme through Nightwatch Axe to make sure we aren't creating any accessibility errors.

This will review a wide range of accessibility rules.

Mixtape

On top of all this, PreviousNext has developed its own design system, Mixtape. This allows us to re-use common design components across the sites we develop.

Mixtape provides a shared set of these reusable design components for our projects.

JavaScript ESM

Our JavaScript builds have evolved to leverage ES6 modules/imports and code splitting with Rollup.

Entry points from custom profiles, modules, and themes are consumed and output, with common chunks, into site-wide libraries. You can read more about our approach in our post on Performance improvements with Drupal 8 Libraries.

All JavaScript uses ES6 syntax, which is transpiled using Babel. This allows us to develop using modern JavaScript while still supporting older browsers. See Using ES6 in your Drupal Components.

Summary

Front-end development is constantly evolving, but as you can see, we can keep the front-end development of Drupal sites up to date using the latest tools and techniques.

Tagged Front End Development, JavaScript, CSS
Categories:

Drupal Association blog: You can become a co-maintainer of modules for Drupal 9!

Thu, 2021/09/23 - 10:26pm

Are you looking to take the next step in contributing to Drupal?

At DrupalCon Europe contribution days, 4-7 October (free to all!), one way you can get involved is by offering to co-maintain modules that still need to be updated for Drupal 9. 

You can find a list of available projects here - be sure to check the date in the issue title to ensure the project is eligible for maintainer requests! 

If an issue has already been closed - that means the maintainer has declined new maintainer help, so focus on the open issues only. 

The steps to request co-maintainership are:

  1. Comment on the issue explaining why you would like to maintain the module. 

  2. If the project is opted in to security coverage, confirm that you have previously received security coverage opt-in permission.

  3. If an existing maintainer has not commented, move the issue to the Drupal.org Project Ownership queue by editing the 'Project' field on this issue.

  4. From there, a Drupal.org Site Moderator will review the issue and grant maintainership if the requirements are met. 

Thank you for getting involved and making Drupal even better!

Categories:

Promet Source: Why Open Source is Force for Good Government

Thu, 2021/09/23 - 7:00pm
Last week, one of the largest and most populous counties in the United States launched a new website that a team of us at Promet Source had the privilege to design and build, managing the content migration from a proprietary CMS along the way. Seeing this beautiful multi-site project through to completion was more than a labor of love. We viewed it as a rescue mission: away from a costly, locked-in software licensing contract and toward the flexibility and freedom of an open source Drupal CMS.
Categories:

Acro Media: Getting started with BigCommerce for Drupal | Acro Media

Thu, 2021/09/23 - 4:00pm

Acro Media’s own Chithra K has put together this handy, step-by-step guide to integrating your BigCommerce store with the Drupal CMS.

BigCommerce for Drupal setup guide

The BigCommerce for Drupal module, created by Acro Media in partnership with BigCommerce, was released early this year and brings together two different platforms – BigCommerce, the open SaaS ecommerce platform, and Drupal, the open source content management system. The result provides a wonderful new way for retailers to implement an innovative and content-rich headless ecommerce strategy. If you use one and would like to have the capabilities of the other, the BigCommerce for Drupal module is the bridge you need. With this module, you can use Drupal as the powerful front-end CMS with BigCommerce as the easy-to-use and scalable ecommerce backend.

This post is a step-by-step guide for people who want to know how to install the BigCommerce for Drupal module and get started with both platforms. If you just want to know more about BigCommerce and Drupal together as an ecommerce solution, check out this post instead.

How this module works

Here’s a quick overview of how this all works. The BigCommerce for Drupal module integrates BigCommerce and Drupal together, but each platform is still used for different tasks.

In BigCommerce, you configure products, categories, shipping, taxes and everything else for the ecommerce side of your site. BigCommerce is also where you go to manage orders as they come in.

Drupal is then used for the website frontend and theming. Product and category information from BigCommerce is synced to Drupal and imported as Drupal Commerce products so that it can be displayed and used like any other Drupal-based content. Any non-commerce content is also managed within Drupal. When a customer goes to checkout, a BigCommerce checkout pane is embedded in the Drupal site to securely process payment and save customer and order information.

Setup BigCommerce and Drupal

On to the guide! Follow these steps and you’ll have your BigCommerce and Drupal store configured in no time!

Prerequisites

This guide assumes that you have the following ready.

  1. A BigCommerce account and store created
    You will need to create a BigCommerce account with at least one product, shipping method and payment method configured in your BigCommerce store. Do this here, not in Drupal.

    NOTE: BigCommerce currently offers a 14-day trial period, so anyone can go and create and configure a store easily for free. For this demo, I signed up for that and created some random products to use for testing.

  2. A working Drupal 8 site
    You should have a Drupal 8 site with the Commerce module enabled and a default store added (via Commerce > Configuration > Store > Stores). You don’t need to do any other setup here yet or enable any of the other Commerce modules like checkout or payment. BigCommerce is going to handle all of this for you.

  3. An SSL certificate for your Drupal site
    Your Drupal website needs to have an SSL certificate active for the BigCommerce checkout form to render. This is required because it ensures security for your customers at checkout, so make sure you install one.
BigCommerce for Drupal setup guide

With the prerequisites done, here's what you need to do to get the BigCommerce for Drupal connection made.

Step 1: Create a BigCommerce API account
  1. Go to your BigCommerce store admin page and navigate to Advanced Settings > API Accounts.

  2. Click on the “Create API Account” button and select “Create V3/V2 API Token”.


    Fig: BigCommerce Store API Accounts page

  3. Provide a name (e.g., Product Sync) and select the scope for each feature (e.g., if you don't want the Drupal admin to be able to modify product information, you can set the scope for "Products" to "read-only").


    Fig: API configuration in BigCommerce

  4. Click “Save” to save your changes. Once saved, you will see a summary and a prompt to download a file. Download it and keep it safe. Once you create an API account, you can’t modify the keys (but you can always make a new one).


    Fig: BigCommerce API Credentials dialogue box
Step 2: Download and configure the BigCommerce for Drupal module
  1. Get and install the BigCommerce for Drupal module.

    TIP: This module requires a bunch of other modules to work. To get the BigCommerce for Drupal module and all of its dependencies at the same time it’s recommended to use Composer instead of manually downloading it. Running the following command within your Composer-based Drupal project will get everything you need.

    composer require drupal/bigcommerce
  2. In Drupal, navigate to the module configuration page at Commerce > Configuration > BigCommerce > BigCommerce Settings.
    1. Fill in the API Path, Client ID, Secret Key, and Access Token that you received when creating the BigCommerce API.

    2. Hit “Save”. If everything is correct, you will see a message saying “Connected Successfully”.


      Fig: BigCommerce Configuration page in Drupal site
  3. Next, we configure the Channel Settings. This will create a storefront URL for you in BigCommerce which will match the one that is generated on the Drupal side.

    1. Select “Add new channel” from the select channel list.

    2. Provide a channel name.

    3. Click the “Create new BigCommerce channel” button. You will then see a Site ID and Site URL on the setting page.


      Fig: BigCommerce configuration page in Drupal
  4. Now, in the same Channel Settings area, click on the "Update BigCommerce Site URL" button. This lets you confirm that the generated URL is actually sent to BigCommerce; otherwise, the checkout form will not load on your Drupal site.

    You can also confirm the channel connection from within the BigCommerce admin dashboard by visiting the Channel Manager admin page.


    Fig: Channel Manager storefront confirmation in BigCommerce
Step 3: Sync products, variations and taxonomies from BigCommerce
  1. In Drupal, navigate to the product synchronization page at Commerce > Configuration > BigCommerce > BigCommerce Product Synchronization.
  2. Click the “Sync Products from BigCommerce” button and ta-da, all the products, variations, and categories will be synced to your Drupal site in an instant.
    Alternatively, you can also synchronize via the following Drush command. Advanced Drupal users can use this command on cron to do automatic syncing.

    drush migrate:import --group bigcommerce
    Fig: Product Synchronization page


    Fig: Syncing from BigCommerce in progress

    NOTE: If you run into errors when syncing products, it is probably because you don’t have a store added in the Drupal Commerce module yet. Add one at Commerce > Configuration > Store > Stores.

    TIP: Any time you make changes to the products in BigCommerce, visit this page or use the Drush command to synchronize the changes. Before syncing, you’ll also see a message telling you that updates are available.

  3. Confirm the products have synced by visiting the Product page for Drupal Commerce at Commerce > Products. A list of all of the products brought in from BigCommerce will appear here.
Step 4: See the BigCommerce checkout in action
  1. Now that everything is set up, go to a product page, add it to your cart, and proceed to checkout.

    If everything was done correctly, you will be able to see the BigCommerce checkout form embedded into your Drupal site! Hurray! All of the shipping methods, payment methods, tax calculations, and other BigCommerce store configurations will be seen in the embedded form here.

    If you don't see the checkout form, make sure that your channel settings are correct and that you have an SSL certificate installed.


    Fig: Drupal’s checkout page with embedded BigCommerce checkout form


    Fig: Drupal’s checkout page after order complete

  2. Once an order has been placed, the order information will be stored in Drupal (at Commerce > Orders) and will also be sent to BigCommerce (at Orders > View).


    Fig: BigCommerce backend View Orders page
Additional notes

The BigCommerce for Drupal module is ready for production and available for all to use. While writing this guide, I came across some additional notes that I wanted to share.

  • At this time, product management should always be handled within BigCommerce and then synced to Drupal. Currently, there is no option to bring a product back if you delete it on the Drupal side, so be careful.
  • A development roadmap for the module can be found here. It outlines future features and plans.
  • If you use the module and find any bugs or want specific features, please add them to the module issue queue here.
Acro Media is a BigCommerce Agency Partner

Acro Media is the development team that partnered with BigCommerce to make the BigCommerce for Drupal module a reality. We have many, many years of ecommerce consulting and development experience available to support your team too.

If you’re interested in exploring Drupal, BigCommerce or both for your online store, we’d love to talk.

Editor’s note: This article was originally published on December 2, 2019, and has been updated for freshness, accuracy and comprehensiveness.

Categories:

robertroose.com: How to create the perfect RSS feed with Drupal 9

Thu, 2021/09/23 - 10:08am

RSS is a great way to syndicate your content, but setting up a feed that correctly displays your articles can be tricky. In this blog post I will show you how to use Views to build the perfect feed in Drupal 9.

Categories:

Redfin Solutions: Upgrading Drupal 7 to Drupal 9: What to expect

Wed, 2021/09/22 - 4:44pm
As a Drupal 7 user or website owner, it’s important to understand what’s next for your web presence as Drupal 7 and Drupal 8 reach their respective end-of-life. This guide will help you understand what to expect so that you can plan accordingly and get a sense for the resources you’ll need to allocate to upgrade Drupal 7 to 9.
Categories:

Tag1 Consulting: On 20 Years of Drupal - an interview with Josh Koenig

Wed, 2021/09/22 - 4:38pm

Drupal has had many, many contributors over its 20 years of existence. These contributors vary from the person answering questions here and there in IRC/Slack and the issue queues, to people who run agencies and hosting companies aimed at keeping Drupal in the public eye. Drupal’s continued success relies on all types of people to keep the drop moving. In this Tag1 Team Talk, we continue to celebrate the 20th anniversary of Drupal. Tag1 Managing Director Michael Meyers is joined by Josh Koenig. Long time Drupal community members will know Josh as one of the founders of ChapterThree, and more recently as a co-founder and Chief Strategy Officer at Pantheon. In this talk, Josh and Michael go back into the history of Drupal, where Josh got started, and how ChapterThree and then Pantheon were formed to meet the needs of Drupal users. --- For a transcript of this video, see Transcript - Josh Koenig on 20 years of Drupal. Click here for a list of other interviews in this series. --- Photo by Gloria Cretu on Unsplash

Categories:

Lullabot: How We Compare: Leaderboards and Related Comparison Metrics in the Drupal Community

Tue, 2021/09/21 - 10:31pm

Whoever said "comparison is the death of joy" was onto something. Comparing ourselves to others can create all kinds of problems, whether we think we are worse, better, or equal. Most of us probably know to avoid comparisons, and yet we can't seem to help ourselves. We do it in our personal lives and in professional settings.

Categories:

Specbee: How to export data from Views using Drupal's Views Data Export module

Tue, 2021/09/21 - 1:33pm
How to export data from Views using Drupal's Views Data Export module Akshay Devadiga 21 Sep, 2021

Oftentimes, we need to export huge amounts of data from views into files so that it can be used for analysis or administration by technical or non-technical users. Instead of creating a custom module for this, we can leverage the Views Data Export module, which has a stable release for the Drupal 7, 8 and 9 versions.

The Views Data Export module was designed to provide a way to export large amounts of data from views. It also provides a plugin for progressive batch operations, which will improve your website's performance.

When would you need the Views Data Export Module?

You would use the Views Data Export module for Drupal 8 and Drupal 9 if you want to:

  • Migrate content between different Drupal instances using migrate tools.
  • Perform a feeds migration, which does the migration with zero coding (whereas with migrate tools we need a custom module with migration scripts written according to the business logic).
  • Generate reports using site data to analyse day-to-day interactions with the website.
Installing the module

It's best to download the Views Data Export module using Composer, since the module depends on the CSV Serialization module and other libraries. When you use Composer for the installation, the dependencies are handled automatically.

$ composer require drupal/views_data_export

Next, install the module as you would any other contributed module. The quickest way is to use the Drush command line tool.

$ drush en -y views_data_export

This will install all the required dependent modules.

Let’s Set it Up

After enabling the module, in order to export the views we first need to create the view and set up the export display with the necessary configuration. Here's a detailed explanation of each step:

1. Creating the view:

Create a master views display according to your requirements, with the necessary fields and filters. In our case, we have created a view listing all the users on the site. Check the image below for reference.

 

2. Creating the export display:

After enabling the module, an extra option to add a Data export display appears in the +Add attachment dropdown. Use it to add the data export display; this creates a new display, copying all the fields and filters from the master display.

3. Data export display configurations:

The export display has various configuration options that help in creating the data export of the view in various formats. See the image below displaying all the configurations.

4. Displaying the page with the download button:

Once all the setup is done, save the view and visit the page. You will now see a download button in the footer region of the view, which downloads the data export with all the applied filters.

Are there any Limitations to this module?

Yes. One of them is that the Drupal 9 version does not support the Excel/XLSX format. Also, batch operations are fully supported only with MySQL databases.

With Drupal 8 and 9's growing list of modules, there always seems to be a module for that!
The Views Data Export module for Drupal is one such module: a very handy tool when you want to export your views results to CSV, JSON or XML formats, saving you the time and effort of writing custom code. Want to discuss a new project with our Drupal experts? We'd love to hear from you!

Tagged Drupal, Drupal 8, Drupal 9, Drupal Development, Drupal Module, Drupal Planet, Drupal Tutorial


Categories:

Web Wash: Bulk Update Content using View Bulk Operations in Drupal

Tue, 2021/09/21 - 8:45am

View Bulk Operations, commonly referred to as VBO, is a module that allows specifically defined actions to be executed simultaneously on multiple rows of Views data.

This tutorial will show how to install the module and set up a simple View with a defined action and a VBO field. We will then demonstrate how to use VBO to perform this action on selected View rows, and show how you can define permissions for roles to use the defined action.
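Under the hood, these actions are standard Drupal action plugins, which VBO picks up automatically. A minimal, hedged sketch of a custom action (the module name my_module and the plugin ID are assumptions for illustration):

<?php

namespace Drupal\my_module\Plugin\Action;

use Drupal\Core\Action\ActionBase;
use Drupal\Core\Session\AccountInterface;

/**
 * Unpublishes a node selected in a VBO-enabled view.
 *
 * @Action(
 *   id = "my_module_unpublish_node",
 *   label = @Translation("Unpublish content (example)"),
 *   type = "node"
 * )
 */
class UnpublishNode extends ActionBase {

  /**
   * {@inheritdoc}
   */
  public function execute($node = NULL) {
    $node->setUnpublished();
    $node->save();
  }

  /**
   * {@inheritdoc}
   */
  public function access($object, AccountInterface $account = NULL, $return_as_object = FALSE) {
    // Only allow the action on nodes the user may edit.
    return $object->access('update', $account, $return_as_object);
  }

}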

Categories:

Agiledrop.com Blog: 3 key considerations for successful agile transformation

Tue, 2021/09/21 - 8:20am

In this article, we discuss 3 key considerations which can serve as great starting points/guides for an agile transformation.

Categories:

Talking Drupal: Talking Drupal #312 - DrupalPod

Mon, 2021/09/20 - 6:51pm

Welcome to Talking Drupal. Today we are talking about DrupalPod with Ofer Shaal.

TalkingDrupal.com/312

Topics
  • Nic - Voting
  • Tara -
    • Did not find core mentoring :oops:
    • New job – Just started a few weeks ago so I’m still getting up to speed.
  • Ofer -
    • Editoria11y, Drupal module to help content editors with accessibility
    • WordTune - Free AI service, rewrite paragraphs.
  • John - SearchStax
  • Module of the Week - Redirect After Login
  • What is DrupalPod?
  • What was the inspiration for DrupalPod?
  • Who are the maintainers?
  • How does DrupalPod work?
  • Are you looking for help maintaining DrupalPod?
  • Who uses DrupalPod?
  • How much of DrupalPod is Open Source?
  • What are some of the features of DrupalPod we haven’t talked about yet?
  • What improvements are on the roadmap? Does it have a roadmap?
  • Where did the DrupalPod Logo come from, who created it?
  • How can people get involved with DrupalPod?
  • Changing gears a bit before we end, what is it like seeing Drupal Rector utilized so extensively across contrib?
Resources

SearchStax Editoria11y WordTune DrupalPod Contributing to Core Gitpod SimplyTestMe GitPod Setup

Guests

Ofer Shaal - @shaal

Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan John Picozzi - www.epam.com @johnpicozzi Tara King - @sparklingrobots

Categories: