
Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

NEWMEDIA: Avoiding the "API Integration Blues" on a Drupal Project

8 hours 48 min ago
As Drupal continues to mature as a platform and gain adoption in the enterprise space, integration with one or more 3rd party systems is becoming common for medium to large scale projects. Unfortunately, it can be easy to underestimate the time and effort required to make these integrations work seamlessly. Here are lessons we've learned...

Mailchimp, Recurly, Mollom, Stripe, and on and on: it's easy to get spoiled by Drupal's extensive library of contributed modules that allow for quick, easy, and robust integration with 3rd party systems. Furthermore, if a particular integration does not yet exist, Drupal is extensible enough that it can be built, given the usual caveats (i.e. an appropriate amount of time, resources, and effort). However, these caveats should not be taken lightly. Our own experiences have unearthed many of the same pain points again and again, and they almost always result in waste. By applying this hard-won wisdom to subsequent projects involving integrations, we've become much better at identifying and addressing these issues head on. We hope that by sharing what we've learned, you can avoid some of the more common traps.

API and Integration Gotchas

Vaporware

Shocking as it may seem, there are situations where a client will assume an API exists when there isn't one to be found. Example: a client may be paying for an expensive enterprise software license that can connect to other programs within the same ecosystem, yet there may be no endpoint that Drupal can access. The key here is to ensure you have documentation up front, along with a working example of a read and/or write operation written in PHP or through a web services call. Doing this as early as possible within the project will help prevent a nasty surprise when it's too late to change course or stop the project altogether.
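For instance, a quick smoke test along these lines can prove early on that a readable endpoint really exists. This is only a sketch: the endpoint URL, API key, and function name are hypothetical placeholders, while drupal_http_request() is a Drupal 7 core function.

<?php
/**
 * Sanity-check a 3rd party API before the integration budget is locked in.
 * The endpoint and key below are placeholders for the client's real service.
 */
function mymodule_api_smoke_test() {
  $options = array(
    'method' => 'GET',
    'headers' => array('Authorization' => 'Bearer EXAMPLE-API-KEY'),
    'timeout' => 10,
  );
  $response = drupal_http_request('https://api.example.com/v1/products', $options);

  if ($response->code == 200) {
    // A readable endpoint exists; log a snippet of a real record as proof.
    watchdog('mymodule', 'API reachable, sample response: @sample', array('@sample' => substr($response->data, 0, 200)));
    return TRUE;
  }
  $error = isset($response->error) ? $response->error : 'unknown error';
  watchdog('mymodule', 'API returned @code: @error', array('@code' => $response->code, '@error' => $error), WATCHDOG_ERROR);
  return FALSE;
}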

Hidden Add-on Fees

An alternative to the scenario above is when an endpoint can be made available for an additional one-time or recurring fee. This can be quite an expensive surprise. It can also result in a difficult conversation with the client, particularly if it wasn't factored into the budget and now each side must determine who eats the cost. The key to preventing this is to verify (up front) if the API endpoint is included with the client's current license(s) or if it will be extra.

Limited Feature Sets

One can never assume that an entire feature set is available. Example: an enterprise resource planning (ERP) software solution may provide a significant amount of data and reporting to its end users, but it may only expose particular records (e.g. users, products, and inventory) to an API. The result: a Drupal site's scope document might include functionality that simply cannot be provided. To avoid this issue, you'll want to get your hands on any and all documentation as soon as possible. You'll also want to create an inventory of every feature that requires a read/write operation so that you can verify the documentation covers each and every item.

Documentation

Transcending the "Drupal learning cliff" was, and continues to be, a difficult journey for many members of the community despite the abundance of ebooks, videos, and articles on the subject. Consider how much more difficult building Drupal sites would be if these resources didn't exist. Now imagine trying to integrate with a system you've never heard of, using a language you're unfamiliar with, and with no user guide to point you in the right direction.

Sounds scary, doesn't it?

Integrating with a 3rd party application without documentation is akin to flying blind. Sure, you might eventually reach the appropriate destination, but you will likely spend a significant amount of time on trial and error. Worse yet, you may simply miss certain pieces of functionality altogether.

The key here, as always, is to get documentation as soon as you can. Also, pay attention to certain red flags, such as the client not having the documentation readily available or requiring time for one of their team members to write it up. This is particularly important if the integration is a one-off that is specific to the customer versus an integration with a widely known platform (e.g. Salesforce or PayPal).

Business Logic

One of Drupal's strengths is the ability for other modules to hook into common events. For example, a module could extend the act of a user saving his or her password by emailing a notification that the password was changed. When integrating with another system, it's equally important to understand what events may be triggered as a result of reading or writing a record. Otherwise, you may be in for a surprise when you find out the external system was firing off emails or trying to charge credit card payments.
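On the Drupal side, that kind of event wiring is just a hook implementation. Here is a minimal sketch of the password notification example; the module name (mymodule) and mail key are hypothetical, while hook_user_update(), hook_mail(), and drupal_mail() are standard Drupal 7 APIs.

<?php
/**
 * Implements hook_user_update().
 *
 * Drupal 7 populates $account->original before this hook runs, so the old
 * and new password hashes can be compared to detect a change.
 */
function mymodule_user_update(&$edit, $account, $category) {
  if (isset($account->original) && $account->original->pass !== $account->pass) {
    $params = array('account' => $account);
    // Relies on the matching hook_mail() implementation below to build the body.
    drupal_mail('mymodule', 'password_changed', $account->mail, user_preferred_language($account), $params);
  }
}

/**
 * Implements hook_mail().
 */
function mymodule_mail($key, &$message, $params) {
  if ($key == 'password_changed') {
    $message['subject'] = t('Your password was changed');
    $message['body'][] = t('The password for @name was just changed.', array('@name' => format_username($params['account'])));
  }
}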

Documentation is invaluable for preventing these types of gaffes. However, in our experience it has also been important to have access to a support resource that can provide warnings up front.

Support

What happens when the documentation is wrong or the software doesn't work? If support for the API is slow or non-existent, the project may grind to a halt until the blocker is removed. For enterprise level solutions, there is usually some level of support that can be accessed via phone, forums, or support tickets. However, there can sometimes be a sizable fee for this service, and your particular questions might not be in scope for what the service provides. In those instances, it might be helpful to contract with a 3rd party vendor or contractor that has performed a similar integration in the past. This can be costly up front while saving a tremendous amount of time over the course of the project.

Domain Knowledge

As consultants, one of our primary objectives is to merge our expertise with the customer's domain knowledge in order to best achieve their goals. Therefore, it's important that we understand why the integration should work the way it does, not just how we read and write data back and forth. A great example of this involves integrating Drupal Commerce with Quickbooks through the Web Connector application. It's imperative to understand how the client's accounting department configures the Quickbooks application and how it manages the financial records. Otherwise a developer may make an assumption that results in inefficient or (worse) incorrect functionality.

Similar to having a resource available for support on the API itself, it's invaluable to have access to team members on the client side that use the software on a daily basis so that nothing is missed.

Stability

Medium to large sized companies are becoming increasingly reliant on their websites to sustain and grow their businesses. Therefore, uptime is critical. And if the site depends on the uptime of a 3rd party integration to function properly, it may be useful to consider some form of redundancy or fallback solution. It is also important to make sure that support tickets can be filed with a maximum response time specified in any service level agreement (SLA) with the client.
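One lightweight form of fallback is to cache the last good response from the external service and serve it when the service is unreachable. Below is a rough sketch under assumed names: the endpoint, cache ID, and function name are placeholders, while cache_get(), cache_set(), and drupal_http_request() are Drupal 7 core functions.

<?php
/**
 * Fetch live data from a 3rd party API, falling back to the last cached
 * response if the service is down. All names here are illustrative.
 */
function mymodule_get_rates() {
  $response = drupal_http_request('https://api.example.com/v1/rates', array('timeout' => 5));

  if (!isset($response->error) && $response->code == 200) {
    $rates = drupal_json_decode($response->data);
    // Keep a copy so the site keeps working during an outage.
    cache_set('mymodule_rates', $rates, 'cache', CACHE_PERMANENT);
    return $rates;
  }

  // Fallback: serve the most recent successful response, if any.
  $cached = cache_get('mymodule_rates');
  if ($cached) {
    watchdog('mymodule', 'Rates API unavailable, serving cached data.', array(), WATCHDOG_WARNING);
    return $cached->data;
  }
  return FALSE;
}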

Communication and Coordination

The rule of thumb here is simple: more moving parts in a project equals more communication time spent keeping everyone in sync. Additionally, it's usually wise to develop against an API endpoint specifically populated with test data so that development does not impact the client's production data. At some point, the test data will need to be cleared out and production data imported. This transition could be as simple as swapping a URL, or it could involve a significant amount of QA time testing and retesting full imports before making the final switch.

The best way to address these issues is simply to build more communication time into the budget than you would for a normal Drupal project.

SDKs

One gotcha that can be particularly difficult to work around is an API that requires you to use a specific software development kit (SDK) instead of a native PHP library. This may require the server to run a different OS (Windows instead of Linux) and web server (IIS instead of Apache). If you're not used to developing on these platforms, development time may be slowed by a significant percentage. For example, a back-end developer may not be able to use the IDE they are accustomed to (with all of their optimized configurations and memorized shortcuts). This requirement may be unavoidable in some circumstances, so the best way to deal with these situations is to add a simple percentage to the budgeted hours.

VMs

When possible, it is ideal for developers to work locally on their own machines with a fully replicated instance of the API they are interacting with. Example: Quickbooks connecting through its Web Connector application to read and write records from a Drupal Commerce site. To test this connection, it is extremely helpful to have a local virtual machine (VM) with Windows and Quickbooks, which a developer can then use to trigger the process. If a project involves multiple developers, each can have their own copy to use as a sandbox.

Setting up a local VM definitely adds an upfront cost. However, for larger projects this investment can generally be recouped many times over through increased development speed and the ability to start from a consistent target.

Final Advice

By now, we hope we've made the case that it's important to do your due diligence when taking on a project involving integrations. And while this entire list of potential pain points may seem like overkill, we've personally experienced the effects of every one of them at some point in our company's history. Ultimately, both you and the client want to avoid the uncomfortable conversation of a project's timeline slipping and its budget being blown. Therefore, it's critical to address these issues thoroughly and as early in the project as possible. If uncertainty is especially high, it's usually beneficial to include a line item within the project statement of work to evaluate this piece separately. Finally, if you're able to effectively negotiate the terms of a contract, the budget for the integration shouldn't be set until an evaluation (even a partial one) has been completed.

Thoughts? Story to share? We'd love to get your feedback on how to improve upon this article.


Gizra.com: Headless Drupal - Form API, Drupal 9

Wed, 2014/07/23 - 11:00pm
Defining moment

A few months ago my DrupalCon Austin session was rejected. I was a bit upset, since presenting plays a big part in my trip to the states, and also surprised, as I mistakenly assumed my presentation repertoire would almost guarantee my session would be accepted. But the committee decided differently.

This has been an important moment for me. Two days later I told myself I don't care. I mean, I cared about the presentation, I just stopped caring that it was not selected, since I decided I was going to do it anyway. As an "unplugged" BoF.

The Gizra Way. I think this is probably the best presentation I've given so far, and quite ironically my rejected session is second only to Dries's keynote on YouTube.

You see - I had a "there is no spoon" moment. The second I realized it can be done differently, I was on my own track, perhaps even setting the path for others.

Form API, Drupal 9

"I use Drupal because Form API is so great" -- No one, ever

Continue reading…


Acquia: Search in Drupal 8 - Thomas Seidl & Nick Veenhof

Wed, 2014/07/23 - 5:57pm

Thomas Seidl and Nick Veenhof took a few minutes out of the Drupal 8 Search API code sprint at the Drupal DevDays in Szeged, Hungary to talk with me about the state-of-play and what's coming in terms of search in Drupal: one flexible, pluggable solution for search functionality with the whole community behind it.


Clemens Tolboom: Interested in ReST and HAL?

Wed, 2014/07/23 - 4:42pm
  1. Check out the issue queue for HAL and ReST.
  2. Use the quickstart tool: https://github.com/build2be/drupal-rest-test.
Need HAL?
  1. Install HAL Browser on your site to see what we've got so far.
  2. cd drupal-root

Acquia: Migration Tips and Tricks

Wed, 2014/07/23 - 3:18pm

Cross-posted with permission from nerdstein

The Migrate module is, hands down, the de facto way to migrate content into Drupal. The only knock against it is the learning curve. All good things come to those who take the time to learn it.


Code Karate: Drupal 7 Splashify

Wed, 2014/07/23 - 1:36pm
Episode Number: 158

In this episode we cover the Splashify module. This module is used to display splash pages or popups. There are multiple configuration options available to fit your site's needs.

In this episode you will learn:

  • How to set up Splashify
  • How to configure Splashify
  • How to get Splashify to use the Mobile Detect plugin
  • How Splashify displays to the end user
  • How to be awesome
Tags: Drupal, Drupal 7, Drupal Planet, UI/Design, Javascript, Responsive Design

Drupal Association News: Building the Drupal Community in Vietnam: Seeds for Empowerment and Opportunities

Wed, 2014/07/23 - 6:38am

With almost 90 million people, Vietnam has the 13th largest population of any nation in the world. It's home to a young generation that is very active in adopting innovative technologies, and in the last decade, the country has been steadily emerging as an attractive IT outsourcing and staffing location for many Western software companies.

Yet amidst this clear trend, Drupal has emerged very slowly in Vietnam and the rest of Asia as a leading, enterprise-ready content management framework (CMF). However, this is changing as one Drupalista works hard to grow the regional user base.

How it all started

Tom Tran, a German with Hanoian roots, discovered Drupal in 2008. He was overwhelmed by the technological power and flexibility that make Drupal such a highly competitive platform, and was amazed by the friendliness and vibrancy of the global community. He realized that introducing the framework and the Drupal community to Vietnam would give local people the opportunity to access the following three benefits:

  • Steady Income: Drupal won’t make you an overnight millionaire, however if you become a Drupal expert and commit to helping clients to achieve their goals, you will never be short of work. Quality Drupal specialists are in huge demand across the world and this demand won’t stop any time soon as Drupal adoption grows.
  • Better Lifestyle: You are free and able to design a work/lifestyle balance on your terms. You can work from home or contribute remotely while traveling, as long as you continue to deliver sustainable value to your client. Many professionals in less developed countries like Vietnam have never imagined this opportunity-- and learning about this lifestyle can be very empowering and inspirational.
  • Cross Cultural Friendships: In spite of national borders and cultural differences, Tom has established fruitful partnerships between his development team from Vietnam and clients from across the globe. Whether clients are based in California, Berlin, Melbourne or Tokyo, his team has successfully collaborated on many projects and often became good friends beyond just project mates. These relationships can only grow thanks to the open Drupal community spirit and the way it connects people from all regions and cultures around the world.

Tom started by organizing a Drupal 7 release party in Hanoi in January 2011. Afterwards, he reached out to Drupal enthusiasts in the region and organized informal coffee sessions, which have contributed to the growth of a solid, cohesive community in Vietnam.

Drupal Vietnam College Tour

With help from a Community Cultivation Grant, Tom put on workshops every three months at Vietnamese universities and colleges in 2012. By showcasing the big brands and institutions using Drupal, a diverse series of use cases demonstrates that demand for Drupal is high and that the Drupal industry is a great place to be. A three-hour hands-on session walks students through the basics of site building with Drupal, and it's at this point that most students get hooked.

March 2012
First ever Drupal Hanoi Conference at VTC Academy, with 120 visitors (facebook gallery)

June 2012
Hello Drupal workshop @ Tech University Danang (gallery)

 

July 2012
Drupal Workshop @ FPT-Aptech (fb gallery, fpt aptech news)

 

September 2012
Drupal Workshop @ NUCE (gallery, Nuce news)

 

November 2012
Drupal Workshop @ FPT University (gallery)

 

December 2012
Drupal Workshop @ Aiti-Aptech (gallery)

 

December 2012
Drupal talk & sponsorship for PHPDay.vn 2012 (local images 2x)

The result was an overall increase in members, and the community keeps growing every day.

What’s next?

Tom is currently planning to organize the first DrupalCamp in Hanoi, Vietnam in late 2014. Today Drupal Vietnam has only roughly 1,300 members (fewer than the LA DUG), but with a growing pool of software engineers graduating each year, this country is set to become a relevant source of highly skilled developers, provided high quality training is affordable and access to jobs can be facilitated. Things look very bright in Vietnam!

Supporters

About

Tom is the founder of Geekpolis, a software company with a development center based in Hanoi, Vietnam. Geekpolis focuses on high-quality managed Drupal development services for bigger consultancy agencies. Currently the team comprises 25 engineers.

To get involved, contact Tom at:

Drupal core announcements: Drupal 7.30 release this week to fix regressions in the Drupal 7.29 security release

Wed, 2014/07/23 - 6:06am
Start: 2014-07-23 (All day) - 2014-07-25 (All day) America/New_York
Sprint Organizers: David_Rothstein

The Drupal 7.29 security release contained a security fix to the File module which caused some regressions in Drupal's file handling, particularly for files or images attached to taxonomy terms.

I am planning to release Drupal 7.30 this week to fix as many of these regressions as possible and allow more sites to upgrade past Drupal 7.28. The release could come as early as today (Wednesday July 23).

However, to do this we need more testing and reviews of the proposed patches to make sure they are solid. Please see #2305017: Regression: Files or images attached to certain core and non-core entities are lost when the entity is edited and saved for more details and for the patches to test, and leave a comment on that issue if you have reviewed or tested them.

Thank you!


Mediacurrent: Understanding the Role of the Enterprise in Drupal

Wed, 2014/07/23 - 3:53am

There is a trending topic I am seeing discussed a lot more in the open-source software and Drupal community. The conversation focuses on what the role of enterprise organizations should be, especially those that have already adopted or are adopting Drupal as their web platform of choice.


Greater Los Angeles Drupal (GLAD): Drupal Migrate using xml 0 to 35

Tue, 2014/07/22 - 8:55pm

Using Drupal Migrate is a great way to move your content into Drupal. Unfortunately, the documentation for XML import can be obscure. This happens when those who developed the module try to explain how they did what they did to someone who did not do the work: things that seem obvious to them are not obvious to someone else.

I have spent some time recently importing content using XML. I am in no way an expert speeding down the fast lane; I'm more cruising around town at a comfortable 35 mph.

To use Drupal Migrate you need to define your own class. A class is PHP code, used in object-oriented programming, that defines your data and how you can manipulate it. Most of the actual migration work is done by the classes provided by the Migrate module; you simply have to define the details of your migration.

Constructor - The constructor configures the Migrate module's classes for your specific data. I was able to follow the SourceList method: it uses one XML file (or feed) that contains the ID numbers of all the content you want to import, and a second file (or feed) that contains the content itself. The wine example that ships with Migrate does this, but understanding what it really wants is more difficult.

Below is my class file explained:
=====================
<?php

/**
 * @file
 * Vision Article migration.
 */

/**
 * Vision Article migration class.
 */
class VisionArticleMigration extends XMLMigration {

  public function __construct() {
    parent::__construct();
    $this->description = t('XML feed of Ektron Articles.');

---------------
So far, pretty easy. You name your class, extend the proper Migrate base class, and give it a description.

-----------------

    // There isn't a consistent way to automatically identify appropriate
    // "fields" from an XML feed, so we pass an explicit list of source fields.
    $fields = array(
      'id' => t('ID'),
      'lang_type' => t('Language'),
      'type' => t('Type'),
      'image' => t('Image'),
      'authors' => t('Authors'),
      'article_category' => t('Article Category'),
      'article_series_title' => t('Article Series Title'),
      'article_part_no' => t('Article Series Part Number'),
      'article_title' => t('Article Title'),
      'article_date' => t('Article Date'),
      'article_display_date' => t('Article Display Date'),
      'article_dropheader' => t('Article Dropheader'),
      'article_body' => t('Article Body'),
      'article_author_name' => t('Article Author Name'),
      'article_author_url' => t('Article Author Email Address'),
      'article_authors' => t('Article Additional Authors'),
      'article_postscript' => t('Article Postscript'),
      'article_link_text' => t('Article Link text'),
      'article_link' => t('Article Link'),
      'article_image' => t('Article Image'),
      'article_image_folder' => t('Article Image Folder'),
      'article_image_alt' => t('Article Image Alt'),
      'article_image_title' => t('Article Image Title'),
      'article_image_caption' => t('Article Image Caption'),
      'article_image_credit' => t('Article Image Credit'),
      'article_sidebar_element' => t('Article Side Bar Content'),
      'article_sidebar_element_margin' => t('Article Margin between Sidebar Content'),
      'article_archived_html_content' => t('Article HTML Content from old system'),
      'article_video_id' => t('Article ID of Associated Video Article'),
      'metadata_title' => t('Metadata Title'),
      'metadata_description' => t('Metadata Description'),
      'metadata_keywords' => t('Metadata Keywords'),
      'metadata_google_sitemap_priority' => t('Metadata Google Sitemap Priority'),
      'metadata_google_sitemap_change_frequency' => t('Metadata Google Sitemap Change Frequency'),
      'metadata_collection_number' => t('Metadata Collection Number'),
      'title' => t('Title'),
      'teaser' => t('Teaser'),
      'alias' => t('Alias from old system'),
      'taxonomy' => t('Taxonomy'),
      'created_date' => t('Date Created'),
    );

-------------------
So what does this mean?
You will need each of these field names below. They have nothing to do with your XML file; you simply need a field for each piece of information you want to import. For example, article_image_alt is the alt text for the article image. Later you will define the XPath used to load each variable. This will start to come together below; just remember that each unique piece of information needs a variable.

---------------------

    // The source ID here is the one retrieved from the XML listing URL, and
    // used to identify the specific item's URL.
    $this->map = new MigrateSQLMap($this->machineName,
      array(
        'ID' => array(
          'type' => 'int',
          'unsigned' => TRUE,
          'not null' => TRUE,
          'description' => 'Source ID',
        ),
      ),
      MigrateDestinationNode::getKeySchema()
    );

---------------------
This sets up the migration's mapping table in the database. With respect to the input data, the Source ID is the field in the input file that points to each data record. My source list file is simply a list of IDs, for example:

567
1054

So we need a map table with a field for the ID, which is an integer.

-----------------------

    // Source list URL.
    $list_url = 'http://www.vision.org/visionmedia/generateexportlist.aspx';
    // Each ID retrieved from the list URL will be plugged into :id in the
    // item URL to fetch the specific objects.
    // @todo: Add langtype for importing translated content.
    $item_url = 'http://www.vision.org/visionmedia/generatecontentxml.aspx?id=:id';

    // We use the MigrateSourceList class for any source where we obtain the
    // list of IDs to process separately from the data for each item. The
    // listing and item are represented by separate classes, so for example we
    // could replace the XML listing with a file directory listing, or the XML
    // item with a JSON item.
    $this->source = new MigrateSourceList(new MigrateListXML($list_url),
      new MigrateItemXML($item_url), $fields);

    $this->destination = new MigrateDestinationNode('vision_article');

-----------------

Now we set up the magic. We define a list URL that returns the IDs of all the content to import, and an item URL that uses each ID to fetch the details for that item. Then we tell Migrate to use MigrateListXML to find the items to import and MigrateItemXML to import them. Finally, MigrateDestinationNode tells Migrate which content type to use. This means we need a separate migration class for each content type we import. I have been creating each class in its own .inc file and adding it to the files[] section of the module's .info file.

-----------------

    // TIP: Note that for XML sources, in addition to the source field passed to
    // addFieldMapping (the name under which it will be saved in the data row
    // passed through the migration process) we specify the Xpath used to retrieve
    // the value from the XML.
    $this->addFieldMapping('created', 'created_date')
      ->xpath('/content/CreateDate');

------------------
Now we map the source field to the destination field. 'created' is the field name in the content type (vision_article); 'created_date' comes from our fields array above. Remember, we need a definition for each piece of content we want to import. The xpath then points to the data in the XML feed. So this says: take the content of /content/CreateDate in the XML file, load it into the source variable created_date, and store it in the created field of a new vision_article node. I spell it out this way because if you do as I did and copy, paste, and forget to change the source variable, that source variable will end up holding whatever the xpath below it points to.

------------------

    $this->addFieldMapping('field_category', 'article_category')
      ->defaultValue(1)
      ->xpath('/content/html/root/article/Category');

-------------------

You can set a default value in case the XML does not contain any data.

----------

    $this->addFieldMapping('field_series_title', 'article_series_title')
      ->xpath('/content/html/root/article/ArticleSeriesTitle');
    $this->addFieldMapping('field_part_number', 'article_part_no')
      ->xpath('/content/html/root/article/ArticlePartNo');
    $this->addFieldMapping('field_h1_title', 'article_title')
      ->arguments(array('format' => 'filtered_html'))
      ->xpath('/content/html/root/article/Title');
    $this->addFieldMapping('field_display_date', 'article_display_date')
      ->xpath('/content/html/root/article/DisplayDate');
    $this->addFieldMapping('field_drophead', 'article_dropheader')
      ->arguments(array('format' => 'filtered_html'))
      ->xpath('/content/Taxonomy');

-------------

Another field argument: the default format for a text field is plain text, so if your content contains HTML you need to set the correct format here.

---------------

    $this->addFieldMapping('body', 'article_body')
      ->arguments(array('format' => 'filtered_html'))
      ->xpath('/content/html/root/article/Body');
    $this->addFieldMapping('body:summary', 'teaser')
      ->arguments(array('format' => 'filtered_html'))
      ->xpath('/content/Teaser');

-----------

Note that you can set the teaser as a part of the body field. One of the Drush Migrate commands makes it easy to discover the additional parts of your destination fields: drush mfd (Migrate Field Destinations). It displays all the destination fields and their options.

------------

    $this->addFieldMapping('field_author', 'article_author_email')
      ->xpath('/content/html/root/article/AuthorURL');
    $this->addFieldMapping('field_author:title', 'article_author_name')
      ->xpath('/content/html/root/article/AuthorName');
    $this->addFieldMapping('field_ext_reference_title', 'article_postscript')
      ->arguments(array('format' => 'filtered_html'))
      ->xpath('/content/html/root/article/Postscript');

---------
see explanation below
--------
    $this->addFieldMapping('field_article_image:file_replace')
      ->defaultValue(MigrateFile::FILE_EXISTS_REUSE); // FILE_EXISTS_REUSE is in the MigrateFile class.
    $this->addFieldMapping('field_article_images', 'article_image')
      ->xpath('/content/html/root/article/Image/File/img/file_name');
    $this->addFieldMapping('field_article_images:source_dir', 'article_image_folder')
      ->xpath('/content/html/root/article/Image/File/img/file_path');
    $this->addFieldMapping('field_article_images:alt', 'article_image_alt')
      ->xpath('/content/html/root/article/Image/File/img/@alt');
    $this->addFieldMapping('field_article_images:title', 'article_image_title')
      ->xpath('/content/html/root/article/Image/File/img/@alt');

--------------

This section gets tricky. You are importing an image or other file. The default migration handler for a file is MigrateFileUrl. You can migrate all your files ahead of time or, as I am doing here, do it inline. The main components are the main field, which holds the file name, and source_dir, which holds the path to the image. Drupal 7 has a database table for the files it uses, storing the URI of each file. MigrateFile normally uploads the file to the public folder and creates an entry in the file_managed table for its URI. In my case I had already copied all the images to a public location on S3 storage, so I did not want Migrate to create a new file but to reuse the existing one. Hence the file_replace setting with the constant MigrateFile::FILE_EXISTS_REUSE, which tells Migrate to use the existing file and simply make an entry in the file_managed table for it.

Later, in the prepareRow() method, I will show how we split the file reference apart and add it to the XML.

------------

    $this->addFieldMapping('field_archive', 'article_archived_html_content')
      ->xpath('/content/archive_html');
    $this->addFieldMapping('field_ektron_id', 'id')
      ->xpath('/content/ID');
    $this->addFieldMapping('field_ektron_alias', 'alias')
      ->xpath('/content/html/Alias');
    $this->addFieldMapping('field_sidebar', 'article_sidebar_element')
      ->arguments(array('format' => 'filtered_html'))
      ->xpath('/content/html/root/article/SidebarElement/SidebarElementInformation');
    $this->addFieldMapping('field_slider_image:file_replace')
      ->defaultValue(MigrateFile::FILE_EXISTS_REUSE); // FILE_EXISTS_REUSE is in the MigrateFile class.
    $this->addFieldMapping('field_slider_image', 'image')
      ->xpath('/content/Image/file_name');
    $this->addFieldMapping('field_slider_image:source_dir', 'image_folder')
      ->xpath('/content/Image/file_path');
    $this->addFieldMapping('field_slider_image:alt', 'image_alt')
      ->xpath('/content/Title');
    $this->addFieldMapping('field_slider_image:title', 'image_title')
      ->xpath('/content/Title');
    $this->addFieldMapping('title', 'title')
      ->xpath('/content/Title');
    $this->addFieldMapping('title_field', 'title')
      ->xpath('/content/Title');

    // Declare unmapped source fields.
    $unmapped_sources = array(
      'article_author_url',
      'article_authors',
      'article_sidebar_element_margin',
      'article_video_id',
      'metadata_title',
      'metadata_description',
      'metadata_keywords',
      'metadata_google_sitemap_priority',
      'metadata_google_sitemap_change_frequency',
      'metadata_collection_number',
    );

-------------

If you are not using a source field, best practice is to declare it in the unmapped sources array.

------------

    $this->addUnmigratedSources($unmapped_sources);

    // Declare unmapped destination fields.
    $unmapped_destinations = array(
      'revision_uid',
      'changed',
      'status',
      'promote',
      'sticky',
      'revision',
      'log',
      'language',
      'tnid',
      'is_new',
      'body:language',
    );

----------------------

If you are not using a destination field, best practice is to declare it in the unmapped destinations array. Note that if you later use the field, you need to remove it from this array.

---------------------

    $this->addUnmigratedDestinations($unmapped_destinations);

    if (module_exists('path')) {
      $this->addFieldMapping('path')
        ->issueGroup(t('DNM'));
      if (module_exists('pathauto')) {
        $this->addFieldMapping('pathauto')
          ->issueGroup(t('DNM'));
      }
    }
    if (module_exists('statistics')) {
      $this->addUnmigratedDestinations(array('totalcount', 'daycount', 'timestamp'));
    }
  }

------------

The rest of the constructor is from the example. It did not cause me any problems, so I did not worry about it.

------------
  /**
   * {@inheritdoc}
   */

---------------

Now we can add our own magic. In prepareRow() we can alter the row's data before it is saved to the content item.

-----------------

  public function prepareRow($row) {
    if (parent::prepareRow($row) === FALSE) {
      return FALSE;
    }
    $ctype = (string) $row->xml->Type;
    // Set a variable for the return code.
    $ret = FALSE;
    // dpm($row);

------------

You will see these scattered through the prepareRow() function. They are Devel's dpm() calls, which print to the screen for debugging. They should be commented out, but they show the process I went through to debug my particular prepareRow(). Also note that this is a great use for the Migrate UI: these print statements only help you in the web interface; if you use Drush you will not see the diagnostic output.

---------------

    if ($ctype == '12') {

---------------

This is specific to my migration. The following code only applies to a content type of 12; the other content types have a different data structure. If prepareRow() returns FALSE, the row is skipped.

------------------

      // Map the article_postscript source field to the new destination fields.
      // if ((string) $row->xml->root->article->Title == '') {
      //   $row->xml->root->article->Title = $row->xml->root->Title;
      // }
      $postscript = $row->xml->html->root->article->Postscript->asXML();
      $postscript = str_replace('<Postscript>', '', $postscript);
      $postscript = str_replace('</Postscript>', '', $postscript);
      $row->xml->html->root->article->Postscript = $postscript;

-------------------

Again, this is something unique to my migration. The content structure is contained in XML, so the HTML inside it is recognized by SimpleXML as XML. The asXML() function returns a string containing the XML of the node, including its wrapping element tags. Stripping those wrapping tags and saving the string back to the node turns it into a plain string node, and we are back to straight HTML. I need to do this for every node that contains HTML. Most of the time you will be able to pass the HTML string as a node and will not have to do this transform.

-------------------

      // Convert HTML nodes to strings so they will load.
      $body = $row->xml->html->root->article->Body->asXML();
      $body = str_replace('<Body>', '', $body);
      $body = str_replace('</Body>', '', $body);
      $row->xml->html->root->article->Body = $body;
      $title = $row->xml->html->root->article->Title->asXML();
      $title = str_replace('<Title>', '', $title);
      $title = str_replace('</Title>', '', $title);
      $row->xml->html->root->article->Title = $title;
      $drophead = $row->xml->html->root->article->Dropheader->asXML();
      $drophead = str_replace('<Dropheader>', '', $drophead);
      $drophead = str_replace('</Dropheader>', '', $drophead);
      // If Dropheader is empty.
      $drophead = str_replace('<Dropheader/>', '', $drophead);
      $row->xml->html->root->article->Dropheader = $drophead;

      // Array to allow conversion of Category text to ID.
      $cat_tax = array(
        'Science and Environment' => 1,
        'History' => 2,
        'Social Issues' => 3,
        'Family and Relationships' => 4,
        'Life and Health' => 5,
        'Religion and Spirituality' => 6,
        'Biography' => 7,
        'Ethics and Morality' => 8,
        'Society and Culture' => 9,
        'Current Events and Politics' => 10,
        'Philosophy and Ideas' => 11,
        'Personal Development' => 12,
        'Reviews' => 13,
        'From the Publisher' => 14,
        'Interviews' => 17,
      );

      // Convert additional taxonomies to tags.
      // $tax_id_in = (string) $row->xml->Taxonomy;
      // $tax_id_array = explode(',', $tax_id_in);
      // $tax_in_array = array();
      // foreach ($tax_id_array as $tax) {
      //   if (is_null($cat_tax[$tax])) {
      //     $tax_in_array[] = $cat_tax[$tax];
      //   }
      // }
      // $new_tax = implode(',', $tax_in_array);
      // dpm($new_tax);
      // dpm($row);
      // $row->xml->Taxomomy = $new_tax;

      // Change category text to ID.
      $category = (string) $row->xml->html->root->article->Category;
      // Specify an unknown category if we do not recognize the category.
      // This allows the migration to proceed and lets us fix it later.
      $tax_cat = $cat_tax[trim($category)];
      // dpm($category);
      if (is_null($tax_cat)) {
        $tax_cat = 18;
      }
      // dpm($tax_cat);
      $row->xml->html->root->article->Category = $tax_cat;

-------------

The Category field in the source is a text field. On the Drupal side, categories are an entity reference to a taxonomy term, which requires an ID rather than text. I manually set up the categories ahead of time, so I created an array with the category text as the key and the term ID as the value; this lets you quickly look up the ID for the text in the Category field and replace the text with the ID. This works, but another way to do it is to migrate the categories first and then let that migration translate the value for you, a feature built into Migrate. The explanation of this comes later.
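As a sketch of that alternative approach, the field mapping below delegates the lookup to an earlier taxonomy term migration. The 'VisionCategory' migration name is hypothetical; sourceMigration() itself is a standard Migrate field mapping method.

// Instead of the hard-coded $cat_tax lookup in prepareRow(), let Migrate
// translate source category values through an earlier term migration.
// 'VisionCategory' would be a term migration registered in hook_migrate_api().
$this->addFieldMapping('field_category', 'article_category')
  ->sourceMigration('VisionCategory')
  ->xpath('/content/html/root/article/Category');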

----------------

      // Modify the image file node.
      // dpm((string) $row->xml->ID);
      if ((string) $row->xml->html->root->article->Image->File->asXML() != '') {
        // dpm((string) $row->xml->html->root->article->Image->File->asXML());
        $src = (string) $row->xml->html->root->article->Image->File->img->attributes()->src;
        $src_new = str_replace('/visionmedia/uploadedImages/', 'http://assets.vision.org/uploadedimages/', $src);
        $row->xml->html->root->article->Image->File->img->attributes()->src = $src_new;
        $file_name = basename($src_new);
        $file_path = rtrim(str_replace($file_name, '', $src_new), '/');
        $row->xml->html->root->article->Image->File->img->addChild('file_name', $file_name);
        $row->xml->html->root->article->Image->File->img->addChild('file_path', $file_path);
      }

--------------

There is a lot of stuff here. Remember that for MigrateFile you need to provide the file name and the source directory. The Image/File node contains an img tag, so we need to get its src attribute and extract the file name and source directory from it. So why the if? Migrate will import a null node as null, but this is PHP code running on the row, and trying to read the src attribute of a null node throws an error. The if statement checks whether the File node is empty and, if so, skips the transformation; Migrate will simply import a null or empty field.

The src is a path relative to the website, so the first thing we do is change it to the full URL of the S3 content storage. The path is basically the same, except that in the database the I in uploadedImages is uppercase; this was a Windows server, so it made no difference there, but the S3 URL is case sensitive. We then use basename() to extract the file name, strip the file name from the URL to get the file path, and create new children in the XML row to store them. I did not point it out earlier, but these are the XPaths used in the field mappings above.

--------------

      $email = (string) $row->xml->html->root->article->AuthorURL;
      if (!empty($email)) {
        $email = 'mailto:' . $email;
        $row->xml->html->root->article->AuthorURL = $email;
      }

-------------

The author URL is the email address of the article's author. We turn it into a mailto: link so that it renders as a link to send the author an email.

---------------

      $archive_html = (string) $row->xml->html->asXML();
      $row->xml->addChild('archive_html', $archive_html);
      $sidebar_element = (string) $row->xml->html->root->article->SidebarElement->SidebarElementInformation->asXML();
      $row->xml->html->root->article->SidebarElement->SidebarElementInformation = $sidebar_element;
      $slider_src = (string) $row->xml->Image;
      $slider_src_new = str_replace('/visionmedia/uploadedImages/', 'http://assets.vision.org/uploadedimages/', $slider_src);
      $row->xml->Image = $slider_src_new;
      $slider_file_name = basename($slider_src_new);
      $slider_file_path = rtrim(str_replace($slider_file_name, '', $slider_src_new), '/');
      $row->xml->Image->addChild('file_name', $slider_file_name);
      $row->xml->Image->addChild('file_path', $slider_file_path);
      // dpm($row);
//dpm($row);

---------------

The rest is repetition of the techniques above. Note that we return TRUE if we want to process the row and FALSE if we do not.

-----------------

      $ret = TRUE;
      // dpm($src);
    }
    // Need to add processing for other Article content types, especially 0 (HTML content).
    // dpm($row);
    return $ret;
  }

}

----------

This is the class I use for one of the imports. Earlier I said I would show the use of another migration in the field mappings. Below is a snippet of code from the issues migration; an issue contains entity references to the vision_article nodes imported above.

-------------

$this->addFieldMapping('field_articles', 'article_id')
  ->sourceMigration('VisionArticle')
  ->xpath('/item/articles/article/ID');

--------------

So this says: use the VisionArticle migration (I will show you where that name is defined next). Migrate knows to look up the source ID, relate it to the destination ID, and store that in the field_articles field.

---------------

Migrate has been around for a while. Initially, migration classes were registered automatically, and you could register them manually if needed. That later changed: classes are no longer registered automatically, and you must register them yourself. So your migration module should include something like the following to register your classes. Note that the key of each array element is the migration name used above.

----------------

function vision_migrate_migrate_api() {
  $api = array(
    'api' => 2,
    // Give the group a human readable title.
    'groups' => array(
      'vision' => array(
        'title' => t('Vision'),
      ),
    ),
    'migrations' => array(
      'VisionArticle' => array('class_name' => 'VisionArticleMigration'),
      'VisionIssue' => array('class_name' => 'VisionIssueMigration'),
      'VisionVideoArticle' => array('class_name' => 'VisionVideoArticleMigration'),
      'VisionFrontpage' => array('class_name' => 'VisionFrontpageMigration'),
    ),
  );

  return $api;
}

----------------

I hope this makes things a little easier to understand. You will need some basic module-building skills (knowing the file names and things like that), but this should help you through the more obscure parts of creating your migration class.

Tags: Planet Drupal

Drupal Association News: Why we moved Drupal.org to a CDN

Tue, 2014/07/22 - 7:54pm

As of a little after 19:00 UTC on 2 July 2014, Drupal.org is now delivering as many sites as possible via our EdgeCast CDN.

Why a CDN?

We are primarily concerned with the network level security that a CDN will provide Drupal.org.

The CDN enables us to restrict access to our origin servers and disallow directly connecting to origin web nodes (which is currently possible). The two big advantages are:

  1. Accelerate cacheable content (static assets, static pages, etc).
  2. Allow us to easily manage network access and have a very large network in front of ours to absorb some levels of attacks.

Here are some examples of how the CDN helps Drupal.org:

  • We were having issues with a .js file on Drupal.org. The network was having routing issues to Europe and people were complaining about Drupal.org stalling on page loads. There was basically nothing we could do but wait for the route to get better. This should never be a problem again with EdgeCast's global network.
  • We constantly had reports of updates.drupal.org being blacklisted because it serves a ton of traffic coming in and out of a small number of IP addresses. This should also not happen again because the traffic is now distributed through EdgeCast's network.
  • A few months ago we were under consistent attack from a group of IPs that was sub-HTTP and was saturating the origin network's bandwidth. We now have EdgeCast's large network in front of us that can 'take the beating'.
updates.drupal.org

By enabling EdgeCast's raw logs, rsync, and caching features, we were able to offload roughly 25 Mbps of traffic from our origin servers to EdgeCast. This change resulted in a drastic drop in origin network traffic, which freed up resources for Drupal.org. The use of rsync and the raw log features of EdgeCast enabled us to continue using our current project usage statistics tools. We do this by syncing the access logs from EdgeCast to Drupal.org’s utility server that processes project usage statistics.

Drupal.org

Minutes after switching www.drupal.org to use the CDN, there were multiple reports of faster page load times from Europe and North America.

A quick check from France / webpagetest.org:
Pre-CDN results: first page load = 4.387s, repeat view = 2.155s
Post-CDN results: first page load = 3.779s, repeat view = 1.285s

Why was the www.drupal.org rename required?

Our CDN uses a combination of Anycast IP addresses and DNS trickery. Each region (Asia, North America, Europe, etc.) has an Anycast IP address associated with it. For example cs73.wac.edgecastcdn.net might resolve to 72.21.91.99 in North America, and 117.18.237.99 in Japan.

Since 72.21.91.99, 117.18.237.99, etc. are Anycast IPs, generally their routes are as short as possible, and the IP will route to whatever POP is closest. This improves network performance globally.

Why can't drupal.org be a CNAME?

The DNS trickery above works by using a CNAME DNS record. Drupal.org must be an A record because the root domain cannot be a CNAME: the RFC on CNAME records does not allow a CNAME to coexist with MX or any other records at the same name. To work around this DNS limitation, Drupal.org URLs are now redirected to www.drupal.org.

 

 

Related issues
https://www.drupal.org/node/2087411
https://www.drupal.org/node/2238131


Stanford Web Services Blog: Cherry Picking - Small Git lesson

Tue, 2014/07/22 - 6:56pm

Small commits allow for big wins.

Something that I have been using a lot lately is Git's cherry-pick command. I find the command very useful and it saves me bunches of time. Here is a quick lesson on what it does and an example use case.

What is git cherry-pick? (man page)

Git cherry-pick allows you to merge a single commit from one branch into another. To use the cherry-pick command, follow these steps:


2bits: Improve Your Drupal Site Performance While Reducing Your Hosting Costs

Tue, 2014/07/22 - 5:00pm
We were recently approached by a non-profit site that runs on Drupal. Major complaints: the main complaint was that the "content on the site does not show up"; the other was that the site is very slow. Diagnosis first: in order to troubleshoot the disappearing content, we created a copy of the site in our lab and proceeded to test it, to see if we could replicate the issues.

read more


Drupalize.Me: Drupal 8 Has All the Hotness, but So Can Drupal 7

Tue, 2014/07/22 - 3:30pm

Drupal 8 is moving along at a steady pace, but not as quickly as we all had hoped. One great advantage of this is that it gives developers time to backport lots of the features Drupal 8 has in core as modules for Drupal 7. My inspiration (and blatant rip-off) for this blog came from the presentation fellow Lullabot Dave Reid gave at DrupalCon Austin about how to Future-Proof Your Drupal 7 Site. Dave's presentation was more about what you can do to make your Drupal 7 site "ready", whereas this article is more about showing off Drupal 8 "hotness" that we can use in production today.


Drupal Easy: DrupalEasy Podcast 135: Deltron 3030 (Ronan Dowling, Backup and Migrate 3.0)

Tue, 2014/07/22 - 3:09pm
Download Podcast 135

Ronan Dowling (ronan), lead developer at Gorton Studios, joins Ted and Mike to talk about all the new features in Backup and Migrate 3.0, including file and code backup and an improved plugin architecture. We also get up to speed with Drupal 8 development, review some Drupal-y statistics, make our picks of the week, and ask Ronan 5-ish questions.

read more


Acquia: Enforcing Drupal Coding Standards During the Software Versioning Process

Tue, 2014/07/22 - 2:18pm

Cross-posted with permission from Genuine Interactive

Les is a web applications engineer at Genuine Interactive. He is a frequent Drupal community contributor. Genuine's PHP team works on projects in a range of industries, including CPG, B2B, financial services, and more.


Blair Wadman: Create your first Drupal admin interface

Tue, 2014/07/22 - 12:34pm

One of the key features of a Drupal module is an admin interface. An admin interface enables you to make a module's settings configurable by a site editor or administrator so they can change them on the fly.

Tags: Drupal Module Development, Planet Drupal

PreviousNext: Using Drupal 8 Condition Plugins API

Tue, 2014/07/22 - 8:03am

Although Drupal 8 has had a Conditions Plugin API for several months, it wasn't until the DrupalCon Austin sprint that we managed to get blocks to use the Conditions Plugin API for block visibility.

The great thing about condition plugins is that they are re-usable chunks of code, and many contrib projects will be able to take advantage of them (Page Manager, Panels, Rules, anyone?)

In this post, I show how you can create an example Page Message module that uses a RequestPath condition plugin to show a message on a configured page.


DrupalCon Amsterdam: Come to the Devops Track at DrupalCon Amsterdam

Tue, 2014/07/22 - 8:00am

So you've finished building a beautiful Drupal website. That means your work is done, right?

Not even close! Building the site is only the beginning: every website needs to be deployed, hosted, monitored, maintained, upgraded, security patched, scaled, and more— and if you start thinking about those things only after finishing your site, you’re bound to run into trouble.

Fortunately, DrupalCon Amsterdam is here to help! We’ll be running a #devops track that will bring devs and ops closer together. We’ll be discussing ways to achieve easier deployments, as well as how to ensure better stability, scalability and security for your big, beautiful Drupal website.

We've got a bunch of awesome speakers with experience in all of the above topics, as well as:

  • managing large sites,
  • doing continuous delivery of applications,
  • automated testing to improve quality
  • ... and many more topics that you should think about when building that beautiful website that can't afford to go down.

The DrupalCon Amsterdam DevOps track will feature a broad range of talks covering the various technologies used in devops, and we expect it will be a nice counterpart to the traditional Drupal-centric tracks. These DevOps sessions will give you a perfect opportunity to peek into new technologies and talk with the best people working on those solutions.

Whether you are putting together a small internal application or a large, popular, internet-facing site, your job does not end at the last commit. So join us in learning how to release stronger and better software faster. We're all in this together, so let's share the love and learn from each other!
