At Tag1, we believe in proving AI within our own work before recommending it to clients. This post is part of our AI Applied content series, where team members share real stories of how they're using AI and the insights and lessons they learn along the way. Here, Sammy Gituko, Software Developer, explores how AI supported improvements to the Metatag module by speeding up the discovery, verification, and replacement of broken documentation links across 30+ plugin files from hours to minutes.
My first contribution to the Drupal Metatag module started with what looked like a simple issue: fixing broken external documentation links. The task was logged as Issue #3559765, "Fix broken links in the Meta tags section," and at first it seemed like a quick cleanup job. But the deeper I looked, the more it revealed about the fragility of open source documentation, and how AI can speed up the repetitive parts of technical contribution work while still requiring careful human judgment.
Broken links may not sound exciting, but they highlight a widespread challenge in open source maintenance. Documentation links age fast. Websites vanish. URL structures change without warning. And because the Metatag module contains dozens of plugin files pointing to different sources, even a small fix meant a lot of detail work.
To begin, I scanned the src/Plugin/metatag/Tag/ directory, which contains over 30 plugin files. This was where AI added real value, not by writing code, but by making the background research faster and more structured. I found six that had broken or unreliable links:
- metatags.org was returning 404
- metatags.org was broken, though the RTA link worked
- csgnetwork.com calculator had connection errors

For each broken link, I needed to verify the issue, find a reliable replacement from an authoritative source, confirm it worked and was stable, then update it in the code without disrupting formatting or introducing linting errors.
Checking each file manually would have been tedious. Using AI, I generated efficient grep patterns for discovering URLs across the whole directory, like this suggestion that matched multiple URL styles: https?://|www\. That one line let me identify every external link across 30+ plugin files in minutes.
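As a concrete sketch of that discovery step (the directory path follows the module layout described above; the sample file and its contents are invented for the demo), a recursive grep with that pattern surfaces every external URL in one pass:

```shell
# Demo setup: a throwaway file standing in for one of the plugin files.
mkdir -p demo/src/Plugin/metatag/Tag
printf '%s\n' ' * @see https://metatags.org/example' > demo/src/Plugin/metatag/Tag/Example.php

# The discovery step: one alternation pattern matches both absolute
# and www-style URLs; -r recurses, -n prints line numbers.
grep -rnE 'https?://|www\.' demo/src/Plugin/metatag/Tag/
```

Each matched line carries its file and line number, which makes the follow-up verification list easy to build.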
The next challenge was figuring out which links actually worked. Instead of opening them one by one, AI recommended using a simple curl command to automatically test HTTP status codes:
curl -s -o /dev/null -w "%{http_code}" "https://example.com"
This approach let me quickly categorize links as 200 (working), 404 (broken), or 301 (redirects), giving me a precise list of which needed attention.
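To make the triage concrete, here is a small shell sketch of that categorization; the function name and the status groupings beyond 200/301/404 are my own additions, not part of the original workflow:

```shell
# Map an HTTP status code (as printed by the curl command above)
# into the triage buckets used during the link audit.
categorize_status() {
  case "$1" in
    200)         echo "working" ;;
    301|302|308) echo "redirect" ;;
    404|410)     echo "broken" ;;
    000)         echo "connection error" ;;   # curl prints 000 when it cannot connect
    *)           echo "needs manual review" ;;
  esac
}

categorize_status 404   # prints "broken"
```

Piping the curl output for each URL through a function like this turns a pile of status codes into an actionable worklist.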
When replacing links, AI helped search for credible alternatives, suggesting sources like MDN, W3C, IETF, or Google Search Central. It also helped compare multiple options and recommend the best one.
Despite its efficiency, AI couldn’t make every decision. Some choices depended on contextual understanding, such as deciding whether a replacement even made sense.
Two plugin files, Standout.php and NewsKeywords.php, both referenced Google News documentation that no longer existed. AI surfaced generic help pages, but none were relevant. Since the tags were already marked @deprecated, I chose to remove the links entirely. This was a judgment call informed by understanding the code’s context and the importance of avoiding misleading or obsolete references.
In Rating.php, the existing RTA link technically worked but wasn’t reader-friendly. The AI proposed a few options, but ultimately, I picked Wikipedia’s page on content rating systems. It included the RTA standard, offered better context, and felt more accessible, a human decision about user experience, not just URL accuracy.
Several clear themes came out of this contribution:
External sources like metatags.org and csgnetwork.com can disappear or restructure, breaking countless references.

The final patch replaced or removed all broken documentation links:
Fixed with authoritative replacements:
- SetCookie: MDN documentation
- Google: Google Search Central
- Expires: IETF RFC 1123
- Rating: Wikipedia

Removed (no suitable or relevant replacements):

- Standout: Google News documentation removed
- NewsKeywords: Google News documentation removed

The workflow became smoother, faster, and easier to reproduce. Using AI to handle repetitive validation tasks allowed me to focus my attention on decisions that actually required human reasoning.
This contribution showed how AI can accelerate contribution workflows without replacing the thoughtful judgment that open source development depends on. By blending AI-assisted discovery with context-aware decision-making, contributors can move faster and still produce work that’s accurate, accessible, and maintainable.
Maintaining external documentation links might never be glamorous, but it’s a perfect example of how AI can make quality improvements faster and more sustainable, one verified link at a time.
This post is part of Tag1’s AI Applied content series, where we share how we're using AI inside our own work before bringing it to clients. Our goal is to be transparent about what works, what doesn’t, and what we are still figuring out, so that together, we can build a more practical, responsible path for AI adoption.
Bring practical, proven AI adoption strategies to your organization, let's start a conversation! We'd love to hear from you.
There is a big party happening at DrupalCon Chicago, and I can't wait.
On March 24th, we're celebrating Drupal's 25th Anniversary with a gala from 7–10 pm CT. It's a separate ticketed event, not included in your DrupalCon registration.
Some of Drupal's earliest contributors are coming back for this, including a few who haven't attended DrupalCon in years. That alone makes it special.
If you've been part of Drupal's story, whether for decades or just a few months, I'd love for you to be there. It's shaping up to be a memorable night.
The dress code is "Drupal Fancy". That means anything from gowns and black tie, to your favorite Drupal t-shirt. If you've ever wanted an excuse to dress up for a Drupal event, this is it!
Tickets are $125, with a limited number of $25 tickets underwritten by sponsors so cost isn't a barrier. All tickets must be purchased in advance. They won't be available at the door. Registration closes March 18th, so grab your tickets soon.
Organizations can reserve a table for their team. Even better, invite a few contributors to join you. It's a great way to give back to the people who helped build what your business runs on.
For questions or sponsorship opportunities, please reach out to Tiffany Farriss, who is serving as Gala Chair and part of the team coordinating the celebration.
Know someone who should be there? Share this with them.
What matters most is that you're there. I can't wait to celebrate together in Chicago.
A new alpha experimental "Admin" theme just landed in Drupal 12 dev (and 11 dev), which is a merge of the Claro and Gin themes. Gin historically extended Claro, which caused complications on both sides. The merged theme makes it possible to iron out things much faster and more effectively, without duplicating effort across two themes. Going forward, the plan is for "Admin" to replace Claro. Until "Admin" becomes stable, Claro will remain the default admin experience. https://www.drupal.org/project/drupal/issues/3556948
More good news in Drupal 12 development. A long time in the making, the Navigation module just replaced Toolbar as the default navigation experience in the upcoming Drupal version. Not only is the new UI more customisable, it is also faster to use, even with deep administration trees. https://www.drupal.org/project/drupal/issues/3575171
Historic moment in Drupal core! Migrate Drupal and Migrate Drupal UI will not be in Drupal 12 anymore. These are core modules dedicated to migrating Drupal 6 and Drupal 7 sites to core. Drupal 7 reached end of life on January 5, 2025, while Drupal 6's EOL was February 24, 2016. The modules will still be in Drupal 11 core until its end of life, expected at the end of 2028. See https://www.drupal.org/node/3466088 for issues around all deprecated modules and themes.
Today we are talking about The Good and the Bad of AI, how our panel feels about AI, and, you guessed it, more AI with guest Scott Falconer. We'll also cover Field Widget Actions as our module of the week.
For show notes visit: https://www.talkingDrupal.com/542
Topics
Scott Falconer - managing-ai.com scott-falconer

Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Catherine Tsiboukas - mindcraftgroup.com bletch

MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
Commerce Core 3.3.0 completely reimagines how merchants interact with orders in the administrative back end. Common order management tasks are accessible from the order view page, and the edit tab will generally no longer be necessary.
This release resolved 102 issues, including bugs and feature requests, and the time was right to tackle longstanding order management requests we’ve heard from the merchants we support. Roadmap influence is a key benefit of working directly with Centarro on your Drupal Commerce projects. 🤓
Orders in Drupal Commerce have always been collections of data entities of varying types. It’s all highly structured and enables complex fulfillment workflows, but managing those various entities was a fragmented experience.
To edit the billing address, you'd head to the order edit form, but you could only view the shipping address there. To change the shipping address, you'd navigate to the shipments tab and edit the relevant shipment(s) there instead. Updating order items meant working through an inline entity form on the order edit page, but the impact on pricing was opaque until submitted. Payment details weren't readily visible.
Each of these tasks used a different form, which meant different administrative contexts for what a CSR really regards as one transaction.
The new order view page is a dashboard for everything related to the transaction. Adding, editing, and deleting order items, shipments, and billing profiles is now handled through modal dialogs that open directly on the order view page.
Across the PHP ecosystem, a hard conversation is beginning to take shape. In a recent opinion piece, Ashraf Abed challenges four major open-source communities (Drupal, Joomla, Magento, and Mautic) to confront a shared reality: slower growth, tighter budgets, and a thinning contributor base. All four are PHP-driven, Composer-based, and built on open-source collaboration. Each has solved complex problems at scale. Yet they now compete not only with proprietary SaaS platforms but also with a broader shift toward consolidation, platform ecosystems, and AI-assisted development that lowers switching costs for engineers.
The argument is not about merging code into a single technical stack. It is about strategic alignment. Fragmentation means four marketing engines, four leadership structures, four roadmaps, and parallel efforts solving overlapping problems. Agencies struggle to hire, contributors stretch across projects, and enterprise buyers hesitate when long-term sustainability feels uncertain. As experienced PHP developers move more easily between frameworks, the historical barriers between communities are no longer purely technical—they are cultural and organisational.
The risks are real. Governance models differ. Brand identity runs deep. Millions of production sites require long-term security and stability. No transition would be simple, and no decision would satisfy everyone. But dismissing the conversation outright may be shortsighted. Open source thrives on bold thinking, especially when the status quo shows signs of strain. If the PHP ecosystem wants to strengthen its talent pipeline and competitive position for the next decade, serious dialogue about collaboration, specialisation, or even partial consolidation deserves attention.
With that context, here are the major stories from last week.
We acknowledge that there are more stories to share. However, due to selection constraints, we pause here for this issue. For timely updates, follow us on LinkedIn, Twitter, Bluesky, and Facebook. You can also join us on Drupal Slack at #thedroptimes.
Thank you.
Alka Elizabeth
Sub-editor
The DropTimes
We heard you... and we want to hear from more of you!
The MidCamp 2026 Call for Sessions has been extended. The new deadline is March 13, 2026.
If you had a session idea brewing but didn't quite get it across the finish line, now's your window. We extended the deadline because we want a lineup that reflects the full range of people who use, build, and care about Drupal — and we're not there yet without you.
MidCamp sessions are open to all skill levels and all corners of the Drupal ecosystem. Whether you're a developer with a deep technical dive, a project manager with hard-won lessons, a designer with a perspective the community needs, or an end user who figured something out the hard way — there is a place for your session at MidCamp.
We're especially interested in talks around:
Not sure if your idea fits? Submit it anyway. We'd rather review more proposals than miss a great talk.
Session submissions are open now through March 13, 2026.
Need help shaping your proposal? Join the #speakers channel on the MidCamp Slack — there are people there who will help you get it over the finish line.
Slack: https://mid.camp/slack
After the submission window closes, our review team will evaluate proposals and notify selected speakers by April 9, 2026. Selected speakers will have until April 15 to confirm, and the full schedule will be published April 16.
MidCamp 2026 is May 12–14 in Chicago. We hope to see you on stage.
This month I gave myself one job to do: redesign the Scarfolk theme.
Do you have a favorite restaurant with a “secret” menu item? Well, DrupalCon has its own secret. And I’m spilling the beans. If you ask any DrupalCon veteran what the best thing about the events is, they’ll say, “The Hallway Track”. Huh?
The “Hallway Track” is the space around and between official schedule items. This might be in the actual hallway, on the sponsor floor, at the parties, or even in a taxi ride to the airport.
Space like this lets serendipity happen. You might get bored and join a conversation and make new friends. You might hear of a problem, and think of a new business idea. Or…
I reached out to several friends to hear some stories about their experiences in the hallway track.
Nikki Flores tells about how she ran into a colleague at DrupalCon and became fast friends!
I had worked with her for almost 2 years, had seen pictures of her family and her dog and her vacations. We had always been connecting weekly and sometimes twice a week on our teleconferences. I never saw her in person until she called my name from across the hall at DrupalCon. When we saw each other, we were so excited because we recognized each other's faces!
Carlos Ospina tells the story about how he took his son to DrupalCon, and that led to the genesis of the IXP program.
I wanted my son to understand why I love this community so much, so we flew him out to Seattle. I told him I knew a lot of people there, but since it was contribution day, there would not be much time to socialize.
After COVID, he agreed to join us again for Portland in 2022. The Sunday before the event, we met some friends for breakfast. I spotted someone I thought I recognized and mentioned it. My son teased me, saying it was probably just because I think I know everyone at DrupalCon.
We sat down, and in the middle of breakfast Eduardo Telaya walked by our table. I called out to him, and he came over. We hugged, and suddenly we were no longer just five people having breakfast. A couple of other friends stopped by to say hello, and our table grew. My son looked at me and said, “So maybe you really do know everyone at DrupalCon.”
I think that moment stuck with him. When we started talking about career options, he agreed to give Drupal a try and came with us to Pittsburgh in 2023 to look for a job. After all, Dad knows everyone, right?
Unfortunately, that was when the hiring slowdown was becoming clear. It was the first time the Drupal Association organized a job fair, and we attended. At one point I had to step away to take a call, and my son did great on his own. He introduced himself, talked to people confidently, and put himself out there. But there were no real opportunities for someone in his position. He had just completed DrupalEasy, had no professional experience, and no background in computer science.
That experience led to conversations with Anilu, and from those conversations the IXP Program was born. It started as a way to help my son get a foothold. He has since moved on from Drupal to explore something different, but the program lives on. We are now approaching 1,750 contribution credits awarded, and six participants have gone through the program.
What began as something personal turned into something that helps others enter the community.
Mike Gifford tells several stories about how he met friends and started his journey to be an Accessibility Maintainer for Drupal Core.
I’ve had so many great conversations with people who have inspired me, challenged me, and made me laugh in the hallway of DrupalCons. Over coffee, lunch or just while trying to charge a device, leaning against the wall.
The first story that came to mind was trying to find Eriol Fox in DrupalCon Vienna. I am not sure what we were using to message each other, but there was a large delay between sending and receiving messages. Then there is the challenge of actually finding each other in these crazy conference centers. Anyways, we had a good time chatting, but she also pointed me to some folks that she had connected with in Japan. I was going to be going there and wanted to find some open source connections while there.
I think it was in DrupalCon Atlanta that I had great conversations with Stephen Mustgrave & Stephen Musgrave. We were all in slightly different breakout groups. I had confused the two of them only a month or two ago and remembered connecting with them and verifying that they are indeed not the same person.
I can’t remember when I ran into Mark Gifford, but it was in some hallway, where we talked about me mostly grabbing the @mgifford in so many new social platforms before he could. I guess he has some right to them.
I actually started contributing to Drupal’s accessibility after a hallway chat. It was some time before Drupal 7 was released, and I remember going up to Webchick and complaining about accessibility errors in Drupal. She turned around and suggested I could do something about it. I don’t know how many thousands of hours I’ve spent on fixing accessibility issues in Drupal since she made that suggestion. Thanks Angie.
Mike Anello intentionally avoided the assignment, but tells a great story about the contribution room!
Forget about the hallway - let’s talk about the contribution room track.
There’s no better way to learn something new and make meaningful personal connections than spending a few hours in a contribution room. There are a few Drupal events each year where I know I won’t be wasting any time listening to over-caffeinated Florida-based front-end developers rant at me about the future of front-end development. Instead, I arrive with an agenda to learn something new about some new Drupal thing by spending time in the contribution room helping to test, write documentation, or work on existing issues.
I can credit this method for supercharging my learning of single directory components, ECA, a good portion of the Drupal AI ecosystem, and more Views internals than I ever wanted (thanks, Lendude!)
At my first Drupal Dev Days (Ghent 2023, IIRC) one of my goals was to use my evolving PhpUnit test-writing skills to use in the contribution area. After talking with a few folks, I was introduced to Len Swaneveld, a core maintainer for the Views module. Len pointed me at a few potential issues to work on, and after reviewing a few of them, I settled on one that seemed like it was completable in a reasonable amount of time. What transpired over the next few weeks will be no surprise to anyone who’s ever worked on core Views code - nothing is simple.
But, the thing that I remember most about that issue is the time that Len spent with me (both in-person and online) mentoring me on some of the darker areas of the Views code base. It gave me an all-new perspective of the module as well as the challenges of maintaining it.
Through this process, and similar ones related to other areas of Drupal, I knew that I was improving my skills by learning from leaders in the community - all while I was helping them!
Perhaps the most rewarding part of it is the fact that after the event, a personal connection now exists - and it doesn’t feel forced. It is a perfectly natural thing to reach out to these new connections via email or Slack with a little, “it was great getting to know you a few weeks ago at Dev Days; I have a quick question for you…”
Networking is the reason for Drupal events - not presentations (sorry, presenters!)
Michael Richardson tells us how the hallway track led to the creation of DrupalCon Singapore!
For me it would be when I went to DrupalCon Lille with the wild idea of running something like a "DrupalCamp Asia", which would be focused on trying to get folks from all over the continent (and the Pacific) to connect together and share their Drupal stories, cultures, and experience for the first time in nearly 10 years.
Through the power of the hallway track in Lille, I was able to connect directly with sponsors, Drupal Association leadership, and regional community leaders, and over those few days the idea evolved into a fully fledged DrupalCon Asia with sponsors, organisers, and the support of the DA all aligned. What would have taken months to organise online was all put in place in just 3 days, and a year later DrupalCon Singapore was a massive success. I'm not sure that would have been possible without those first conversations halfway across the world in Lille.
Baddý Breidert tells us how participating in the DrupalCon prenote led to multiple friendships!
My first DrupalCon was Amsterdam in 2014, and I remember going to that event not knowing anyone. During the Hallway Track I got to know MortenDK, who introduced me to a lot of people, and after that conference it became a bit easier to attend each DrupalCon. At DrupalCon Dublin 2016, Jam and others from the community invited me to join the pre-note, which I gladly accepted. The pre-note always happened before the Driesnote, and the purpose of the event was to entertain the keynote attendees and kick off the conference. The show featured an Irish adventure theme, where the characters attempted to find a “pot of gold” while exploring the concept of “scope” in a humorous, technical, and musical “infotainment” style.
Cristina Chumillas tells how she went outside of the conference to find a magical donut, and brought it back to share!
Soooo on the first DrupalCon in Portland after covid, the day after committing Claro and Olivero, with Lauri, we went for a quick adventure to find a famous doughnut with bacon and maple syrup. At Voodoo Doughnuts.
Anyway, we were at the sprints and were working on Olivero issues, so by the time we left it was about to close. On the way it started raining A LOT and when we arrived they were closing and there were no more doughnuts. But since we were there we took the chance to get inside the shop and asked for it, and they still had one! So we bought it and ended up eating it with 8 people at the sprints.
JD Flynn perfectly wraps up the hallway track in his rendition.
To me, the hallway track is where the magical moments are found. It's where connections are made. It's where friendships begin. Sessions at events are amazing, and should definitely be attended. However, the real inspiration and sparks happen during spontaneous conversations that happen just because you bump into someone and start talking about this idea you've had or this bug you found. Before you know it, you're both sitting with your laptops out and building something together. That doesn't happen while sitting quietly in a session.
The "hallway" track isn't limited to just the hallway of the event's venue either. It carries over to the parties and the after parties where lifetime friendships and memories are formed. It's not an exaggeration to say that most of the people in my life who I consider good friends are good friends because of that spark that happened in the hallway, wherever that hallway might exist. It could be bonding over a drink, shared love of a type of food, randomly bumping into someone who looks familiar outside of the event, or picking the song at karaoke that gets everyone up and dancing. I owe some of the strongest relationships in my life, personally and professionally, to the hallway track.
As JD said, the hallway track is where the magic happens. But how do you find it?
You need to put yourself out there. Sit down at lunch tables where you don’t know anyone, and strike up conversations. Go to the event parties and talk to people in the lines at the bar. Join a trivia team with people that you don’t know!
You might just end up with some serendipity of your own!
Visitors form an impression of a site almost instantly. If those first moments feel smooth, they’ll keep exploring. If not, they’ll quietly close the tab. That challenge is even greater for content-rich websites, where each request can trigger complex rendering behind the scenes.
I hadn't heard of WebMCP, but now I'm mildly fascinated by the possibilities.
Got some issues with Drupal workspaces? I got your back.
At Tag1, we believe in proving AI within our own work before recommending it to clients. This post is part of our AI Applied content series, where team members share real stories of how they're using Artificial Intelligence and the insights and lessons they learn along the way. Here, Ajit Shinde, Senior Drupal Developer, explores how AI supported his work on the contributed Trash module, including a complex taxonomy hierarchy challenge, and what he discovered about getting real value from these tools.
I have been using AI for quite some time now, both at work and on various personal projects. I was inspired and empowered by Tag1's internal AI workshop, and I started using it extensively for both internal projects and a challenging contrib module assignment. What surprised me most was not AI magically solving hard problems, but how much time it could save on the repetitive parts of the work, especially around writing and iterating on tests. I want to share what I discovered because some of it genuinely surprised me.
Wanting to find an issue to tackle with AI for my own learning experience, I picked up issue #3491947 to add taxonomy support to the Trash module. The Trash module provides soft-delete functionality for Drupal entities and, once configured, it lets you delete entities temporarily and restore them later or purge them permanently if needed. This sounds straightforward enough, but taxonomy terms have a wrinkle that makes things interesting. When you delete a parent term, Drupal core deletes its children too, and this hierarchical deletion creates real challenges for a trash and restore workflow.
This issue had been open for a while with some interesting discussion about the right approach, and I had actually started working on it earlier as part of Tag1's sponsored open source development. I had created an initial MR that enabled trashing terms, but the hierarchy problem was still unsolved, and I wanted to see how AI would handle the complexity.
The module maintainer, Andrei Mateescu (amateescu), had suggested an elegant approach in the issue queue: since cascading deletes happen in the same request, all the deleted terms would have the same timestamp, so we could use that deleted timestamp to restore child terms along with their parent. It sounded promising, and I wanted to test whether it would actually work. So I started down the path, expecting that if I could lean on the shared deleted timestamp, I might avoid storing extra hierarchy data myself.
I started with the Cline extension for VS Code in a “Plan” mode (Figure 1).
Just for fun, I opened with a prompt asking it to help me formulate a plan by going through the issue and studying the Trash module code. It could not access the issue directly, which is actually good from a security perspective, but Claude scanned my project and was able to find the Trash module in it, scanned through the code, and presented a decent understanding of how trashing works. I then explained the hierarchical deletion problem in detail, and the AI checked again and reached the same conclusion I had. When I shared Andrei’s suggestion about using the delete timestamp for restore, the AI presented a three-phase plan covering implementation, basic tests, and optional UX improvements.
At first glance it looked reasonable, but I spotted a major flaw immediately. The AI assumed that deleting a term would trash the parent and hard-delete the children, which is simply wrong. Once trash support is enabled, all terms being deleted move to trash, including the children, and every bit of logic that followed was built on this broken assumption. This is exactly the kind of thing that makes working with AI both frustrating and fascinating. There were other misunderstandings (hallucinations) too, so I decided to start fresh.
After a few trials, I used the following detailed prompt in a new session:
Assume a role of an expert Drupal developer developing a custom feature for the contributed Trash module on a Drupal site. We want the module to support "trashing" taxonomy terms, which is currently disabled.
Background/Current State:
The Trash module adds a deleted column to entity data tables to enable soft-delete/trash functionality.
Taxonomy term trashing is currently disabled via a code line that I can remove to re-enable it.
When a taxonomy term is deleted by default in Drupal, its children (if they have only one parent) are also deleted. This complicates restoration logic.
Goals:
Allow taxonomy terms to be trashed (not hard-deleted), leveraging the module’s infrastructure.
Enable correct restoration of trashed terms, including their child terms.
Technical Challenges:
Taxonomy Overview Page Tree Rendering:

The taxonomy overview page uses the buildTree function to load hierarchical term data via direct database queries. buildTree currently doesn’t account for the trashed (deleted) column, so trashed terms might break the vocabulary page (e.g., inaccessible, errors). Core updates to taxonomy API are not allowed (must solve without modifying Drupal core).
Restoration Logic for Hierarchies:
The existing trash implementation only restores single entities.
When a parent term is restored from trash, I need to also restore child terms that were trashed at the same time.
All terms trashed together share the same deleted value, which can be used to identify them as part of a single trash operation.
Please provide step-by-step guidance for:
Enabling trash support for taxonomy terms.
Where and how to safely re-enable support (removing the disabling line).
Modifying or extending buildTree (or its usage) without core changes to prevent errors from trashed terms.
Approach for filtering out trashed terms when rendering the tree.
If possible, suggest ways to hook, alter, or override tree loading, limited to contributed/custom code.
Implementing restoration logic for hierarchical terms.
How to batch-restore child terms if their parent is restored from trash.
Using the shared deleted value (timestamp/etc.) to identify child terms involved in the same operation.
Any Drupal hooks or architectural suggestions for this process.
I had to do several trial-and-error iterations to come up with this prompt. With that detailed context, the AI performed much better.
This prompt was far more effective than earlier ones because it clearly defined the AI’s role, goals, and constraints from the start.
By identifying the AI as an expert Drupal developer, the prompt aligned its reasoning with real-world Drupal development patterns rather than generic guesses.
Defining clear end goals (enabling trash for taxonomy terms and restoring hierarchical relationships) kept the conversation focused, while listing technical challenges upfront, such as fixing buildTree() behavior without touching core and managing term restoration logic, provided essential guardrails against hallucinations.
Together, these details created a structured context that helped the AI generate more accurate, actionable output instead of speculative or incorrect code suggestions.
The AI enabled trash support for taxonomy terms in configuration by identifying the disabling code. It then tried to fix the listing page queries and failed. I pointed it to query tags: tag the listing queries, then alter them to check whether a term is trashed. This was implemented swiftly. The AI also implemented a Trash handler for taxonomy terms, which was a good decision I had not prompted. But its implementation had a fatal flaw: it stored the term hierarchy in a class-level array (static caching). That cannot work, because delete and restore operations happen in separate requests with no persistence between them.
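The query-tag approach can be sketched roughly like this (the module name, tag name, table alias, and column semantics here are illustrative assumptions, not the Trash module's actual API — the listing queries would first need to be tagged, e.g. with `$query->addTag('term_listing')`):

```php
<?php

use Drupal\Core\Database\Query\AlterableInterface;
use Drupal\Core\Database\Query\SelectInterface;

/**
 * Implements hook_query_TAG_alter() for a hypothetical 'term_listing' tag.
 */
function mymodule_query_term_listing_alter(AlterableInterface $query) {
  if ($query instanceof SelectInterface) {
    // Hide trashed terms, assuming the soft-delete 'deleted' column on the
    // term data table (aliased 't' here) is NULL when a term is not trashed.
    $query->isNull('t.deleted');
  }
}
```

The appeal of this pattern is that the contrib code only needs to tag its queries once; any module can then adjust them without patching the query-building code itself.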
While testing, I found that the Trash module automatically trashed the child terms when the parent was deleted. This was expected behavior.
Through old-fashioned debugging and stepping through the deletion flow, I finally understood why the timestamp approach would not work, and it comes down to how Drupal core handles term deletions.
Say you have a hierarchy of terms A and B where A is the parent. When term A is deleted, Term::preSave is called and then the term is deleted. After that, Term::postDelete is called to delete the orphaned children. But here is the problem: When the child term B is about to be deleted, Term::preSave is called again, and that function resets the parent to root before the term gets trashed. This removes any trace of the previous hierarchy entirely.
So even though both terms end up with the same deleted timestamp, we have lost the parent-child relationship by the time they are in the trash. We will need to store the hierarchy data somewhere else, probably with the term itself as others had suggested in the issue. I posted this finding back to the issue queue because it changes the direction of the solution. After that, the maintainer suggested that we pause and reconsider the overall direction of the fix before writing more implementation code. I didn’t want to just stop working at that point, so I decided to focus on something that would still add value: a solid set of tests around the behavior we had uncovered.
I reached this conclusion through traditional debugging, but AI became useful once the solution path was unclear, because it could help me quickly generate and refine tests instead of spending my time on boilerplate.
With the understanding that solving hierarchical restore is tricky, needs more intervention from the module’s maintainer, and that the direction of the fix was on hold, I decided to change how I contributed. Instead of trying to force a solution, I asked AI to help enumerate test scenarios and create tests for trashing, restoring, and purging terms, and it came up with a decent list covering all three operations.
I specifically requested creating atomic tests where one test does one thing. In previous iterations I had noticed it crammed multiple assertions into single tests, but this time it created a comprehensive test class covering all scenarios properly. AI generated the initial test code and structure, and my job was to review, adjust, and guide it toward clean, atomic tests that matched how we actually use the Trash module.
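To illustrate the atomic style I asked for — one test, one behavior — here is a sketch; the class name, vocabulary, module list, and elided assertion are all hypothetical, not the module's real test suite:

```php
<?php

namespace Drupal\Tests\trash\Kernel;

use Drupal\KernelTests\KernelTestBase;
use Drupal\taxonomy\Entity\Term;

/**
 * Illustrative only: one narrow behavior per test method, so a failure
 * points directly at the broken scenario.
 */
class TermTrashTest extends KernelTestBase {

  protected static $modules = ['taxonomy', 'trash'];

  public function testTrashingKeepsTermData(): void {
    $term = Term::create(['vid' => 'tags', 'name' => 'Parent']);
    $term->save();
    $term->delete();
    // With trash enabled, deletion is a soft delete: the row should still
    // exist with a 'deleted' marker rather than being removed outright.
    // (The exact assertion depends on the module's storage details.)
  }

}
```

Keeping each test this small is what makes the suite a useful safety net later: when the hierarchical-restore direction is finally settled, a failing test names the exact scenario that regressed.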
This felt like a real breakthrough in how to work with these tools: you must provide as much context, detail, and guidance as possible in order to get good results.
It tried to run these tests, failed a few times, and I had to step in to explain that the project uses DDEV and tests need to run inside the container. Even then it took three or four more attempts to get the tests running, which tested my patience a bit. As expected, all tests except the hierarchical restore passed.
None of this moved the core solution forward directly, but it still added real value. By using AI to design scenarios and generate most of the test code, I could keep making progress without sinking a lot of time into repetitive work. Those tests now act as a reusable safety net for future changes, so when the direction is finally decided, we will already have a solid foundation to build and iterate on.
Without AI, I probably would have stopped at one or two manual checks or a much smaller test suite, simply because of the time investment. With AI handling the boring parts of writing and reshaping tests, I could afford to cover more scenarios and refine them, even though the underlying solution is still on hold.
The Trash module work showed me how AI can help with testing around complex problems, but internal projects are where it really clicked for me: there, the AI proved more immediately useful, and I got genuinely excited about the possibilities. For example, on a separate internal project, I needed to create a text filter plugin for rich text formats in Drupal that checks for special unordered lists and replaces them with a Storybook component.
I crafted my prompt carefully based on previous attempts where the AI went wild and tried to implement everything from scratch including the filter plugin, the component, and all the theme functions. In a new session I prompted explicitly that I did not need a new theme function, that it should just use twig include from Storybook, and that it needed to handle nested lists properly since they are multi-level.
It generated a decent filter plugin on the first try, and while there were mistakes, of course, I guided it toward modern PHP patterns like attributes, constructor property promotion, and PHPStan compliance. A couple of project-specific issues needed manual fixes, but the core implementation was solid.
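The overall shape of such a filter plugin, using attribute-based discovery, looks roughly like this — the plugin id, class name, and matching logic are placeholders, not the project's actual code:

```php
<?php

namespace Drupal\mymodule\Plugin\Filter;

use Drupal\Core\StringTranslation\TranslatableMarkup;
use Drupal\filter\Attribute\Filter;
use Drupal\filter\FilterProcessResult;
use Drupal\filter\Plugin\FilterBase;
use Drupal\filter\Plugin\FilterInterface;

#[Filter(
  id: 'special_list_filter',
  title: new TranslatableMarkup('Special list filter'),
  type: FilterInterface::TYPE_TRANSFORM_REVERSIBLE,
)]
class SpecialListFilter extends FilterBase {

  public function process($text, $langcode): FilterProcessResult {
    // Real implementation: parse the markup, find the special <ul> lists
    // (including nested, multi-level ones), and render each through the
    // Storybook component's Twig template via a twig include, rather than
    // a custom theme function.
    return new FilterProcessResult($text);
  }

}
```

Being explicit in the prompt that the filter should reuse the existing component template — not invent theme functions — is exactly what kept the generated class this small.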
The tests went really well too. I started with a Kernel test, then tried a Functional test, and eventually settled on Unit tests on the tech lead's suggestion. The Unit test was precise and to the point, and I think this is where AI really shines because the context is clear and limited in unit testing scenarios.
As someone who is mostly a backend developer, this is where I found the use of AI to be genuinely delightful. It quickly understood the project's styling approach, detected that we were using Tailwind, and suggested fixes with basic prompts. No deep context needed, just quick wins that saved me from the usual frustration of wrestling with frontend styling.
Together, these internal projects reinforced the same pattern I saw with the Trash module: AI is most helpful when I give it a narrow, well-defined problem and let it handle the repetitive parts while I focus on the decisions.
The biggest realization I had is that you need to treat AI as a junior developer who needs clear guidance and supervision. It never replaced my debugging or architectural judgment, but it did make it cheaper and faster to handle the repetitive parts of the work. Providing context aggressively makes an enormous difference, and the more specific you are about the codebase structure the better the results turn out. In my case that meant offloading a lot of test boilerplate, trying out different test structures (Kernel, Functional, Unit), and iterating on scenario coverage without feeling like I was wasting time. Limiting where the AI can scan also helps manage the context window size, which matters more than I initially realized.
I found that spending time in Plan mode before rushing to Act pays off tremendously because it lets the AI think through the problem first. Keeping scope small is also critical since large ambitious prompts lead to hallucinations and broken assumptions while small well-directed tasks actually get completed correctly.
When the AI starts hallucinating, rolling back to a previous checkpoint or starting fresh with the same context works much better than trying to correct a conversation that has gone off the rails. I learned this the hard way through several frustrating sessions.
Managing the “context window” for the AI is important. Sometimes starting a fresh conversation makes more sense than continuing with the existing one. I made sure that I exported the context in each conversation, adjusted it and carried that to the next session.
Used this way, AI feels like a junior developer who is great at cranking out tests and repetitive scaffolding, while I stay focused on debugging, design decisions, and understanding the problem. The thing that excites me most is that using AI we can finally do test-driven development without it feeling like a burden. That alone makes learning to work with these tools worthwhile, and I am looking forward to exploring this further on future projects.
This post is part of Tag1’s AI Applied content series, where we share how we're using AI inside our own work before bringing it to clients. Our goal is to be transparent about what works, what doesn’t, and what we are still figuring out, so that together, we can build a more practical, responsible path for AI adoption.
Want to bring practical, proven AI adoption strategies to your organization? Let's start a conversation! We'd love to hear from you.
Image by Cline
At LocalGov Drupal Dev Days in London earlier this month, the topic came up of releasing custom project code as contrib modules.
There were many people in the room who said they had custom code in their site codebase that they planned to release as contrib modules, but needed to find the time to get it ready. I heard people mention the work that they had left to do for this, and it sounded very familiar: generalise the functionality, remove client-specific code, remove client-specific strings.
This reminded me of a session I did at Drupal Camp London way back in 2014, on this very topic: releasing more code from your codebase, to lower the amount of custom code and share more with the community. Since then, I've gone on to release many more contrib modules, and the introduction of more powerful APIs and systems with Drupal 8 has added to what's possible, so I thought I'd revisit my thoughts on different ways to approach this. My presentation was on the 'why' as well as the 'how', but I'll assume you know that part already.
The first thing to say is that as with tests or accessibility, it's much easier to write contributable code from the start rather than rework it later.
Fundamentally though, whether you plan from the start or retrofit later, the basic principle is that you want your code to be split into two layers: the contrib and the custom. Think of it as a contrib cake with custom icing on top.
The tricky part is where to put the dividing line. It's not always clear how much of your functionality is generic and applicable to other use cases and other clients.
I always err on the side of putting too much in contrib, and offsetting the possibility that the contrib code is too specific with customisability.
But how do we actually slice it up?
Plugins are one of the most powerful ways of switching behaviour in Drupal. Defining your own plugin type allows you to design exactly which parts of the code are handed over to the plugin, and in as many places as you want, by adding more methods to your plugin's interface.
If the methods in the plugin start to look unrelated, you can always add a second plugin type. And if the amount of boilerplate needed for a plugin type is off-putting, Module Builder generates it all for you.
It's worth also considering the lesser-known sibling of attribute plugins, the YAML plugin. If you only want to change strings or parameters, then you can put all that into YAML instead of a whole class. (And YAML plugins do allow custom classes for oddball cases.)
With a plugin system, you need a way to set the plugin to use. There are two ways you could do this: if it's a single plugin that you select, use a plain config setting. If it's a pattern that you might want several of, use a config entity that holds the reference to the plugin. This requires a fair bit of boilerplate code, but there are examples in contrib that you can crib from, such as Flag and Action Link.
And remember that there are other systems that allow ways to select a plugin: field formatters and widgets, Views handlers for fields and filters and so on, paragraph behaviours, and more.
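As a sketch of the split, the contrib module might define an interface like this (all names invented for illustration) and call it at every point where behaviour should be swappable; the site's custom module then just supplies a plugin implementing it:

```php
<?php

namespace Drupal\mymodule;

/**
 * Hypothetical plugin interface: the contrib layer calls these methods,
 * and site-specific "icing" plugins provide the answers.
 */
interface CakeSlicerInterface {

  /**
   * Returns the label shown to users; custom plugins swap the string.
   */
  public function label(): string;

  /**
   * Builds the site-specific portion of the render output.
   *
   * @param array $context
   *   Whatever data the contrib module decides to hand over.
   */
  public function build(array $context): array;

}
```

The dividing line is then explicit: anything behind the interface is customisable per site, and adding a method to the interface is how you widen the custom layer later.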
Twig templates are a great way to customise output from your module. You can change strings, rearrange elements, and add CSS classes for styling.
You'll need to define the theme hook using hook_theme(), and define the variables the template uses. Then, provide a neutral version of the template in the contrib module's /templates folder, and override it in your site's theme.
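A minimal hook_theme() implementation for this pattern looks like the following — the module name, hook name, and variables are placeholders:

```php
<?php

/**
 * Implements hook_theme().
 */
function mymodule_theme($existing, $type, $theme, $path) {
  return [
    'mymodule_summary' => [
      // Maps to templates/mymodule-summary.html.twig in the module;
      // a site theme overrides it by shipping its own copy of that file.
      'variables' => [
        'title' => NULL,
        'items' => [],
      ],
    ],
  ];
}
```

The neutral template in the module keeps the contrib layer usable out of the box, while the theme override is where client-specific strings and classes belong.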
For forms, use hook_form_alter() to change the labels of elements and their order.
Or you can even add extra form elements, and handle their values in a custom submit handler.
If your alterations start to get too complex, consider using a plugin that you pass the form to for customisation.
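A small sketch of the form-alter layer — the form ID, element names, and submit handler are invented for illustration:

```php
<?php

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_FORM_ID_alter() for a hypothetical contrib form.
 */
function mymodule_form_example_settings_form_alter(array &$form, FormStateInterface $form_state, $form_id) {
  // Relabel and reorder an element provided by the contrib module.
  $form['subject']['#title'] = t('What is this about?');
  $form['subject']['#weight'] = -10;

  // Add an extra element and a submit handler to process its value.
  $form['priority'] = [
    '#type' => 'checkbox',
    '#title' => t('This is urgent'),
  ];
  $form['#submit'][] = 'mymodule_example_settings_submit';
}
```

Because the alter lives in the custom layer, the contrib form stays generic and every site can reshape it independently.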
Simplest of all is to use config to override values, whether they are strings or parameters.
Define the config schema for the settings, add a default config to the module's config/install, and then override it in your project's config.
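On the code side this is just an ordinary config read; the config object and key names below are placeholders:

```php
<?php

// The contrib module ships config/install/mymodule.settings.yml with the
// defaults and a matching schema in config/schema. Site-specific values are
// then normal config overrides; the module only reads the active config:
$settings = \Drupal::config('mymodule.settings');
$button_label = $settings->get('button_label');
$items_per_page = $settings->get('items_per_page');
```

This is the cheapest dividing line of all: anything you might conceivably want to vary per site becomes a setting with a sensible default.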
It's worth looking at existing APIs that allow a custom module or theme to alter functionality. For example, in core, field widgets can be altered with hook_field_widget_single_element_form_alter() and field formatters with hook_field_formatter_third_party_settings_form(). And all sorts of unspeakable things can be done to Views with field, filter, argument, and sort handlers, and display extender plugins.
If you're short on time and resources to work on splitting your cake up, there are some shortcuts you can take. It's what I call the 'code and run' method of releasing code: the contrib module is incomplete, but released in the hope that the next person who finds it useful will pick it up and move it forward.
My opinion on this is that releasing some code, even if it's half-baked, is better than not releasing code at all, as long as you clearly explain on the project page that the module has things missing or incomplete, and leave a trail in the code in the form of comments and placeholders.
Do you need help with preparing custom code to be released as a contrib project? It's a great way to get more presence for you or your organisation. I'm available for hire - contact me!
We're excited to celebrate you -- our future speakers! If you've got an idea for a session, now's the time to get involved in MidCamp 2026, happening May 12-14 in Chicago.
Since 2014, MidCamp has hosted over 300 amazing sessions, and we're ready to add your talk to that legacy. We're seeking presentations for all skill levels, from Drupal beginners to advanced users to end users and business professionals!
For full submission details and guidelines, visit: midcamp.org/events/2026/how-submit-session
Looking to connect with the Drupal community? Sponsoring MidCamp is the way to do it! Whether you're recruiting talent, growing your brand, or simply supporting the Drupal ecosystem, MidCamp sponsorship offers great value. Act early to maximize your exposure!
Ready to submit your session? Click away and let's make MidCamp 2026 unforgettable!
For most developers, DDEV solves a common challenge: making sure that each developer has a consistent, stable local environment for building their web application. We had more and more success with DDEV at Lullabot, but another related issue kept coming up: how do we grow and develop our use of continuous integration and automated testing while avoiding the same challenges DDEV solved for us?
A typical CI/CD pipeline is implemented using the tools and systems provided by the CI service itself. For example, at a basic level you can place shell commands inside configuration files to run tests and tools. Running those commands locally in DDEV is possible, but it's a painful copy/paste process. If you're a back-end or DevOps engineer, odds are high you've wasted hours trying to figure out why a test you wrote locally isn't passing in CI – or vice versa!
As a first step, we used Task to improve our velocity. Having a unified task runner that works outside PHP lets us standardize CI tasks more easily. However, this still left a big surface area for differences between local and CI environments. For example, in GitHub, the shivammathur/setup-php action is used to install PHP and extensions, but the action is not identical to DDEV. Underlying system libraries and packages installed with apt-get could also be different, causing unexpected issues. Finally, there was often a lag in detecting when local test environments broke because those changes weren't tested in CI.
This brought us to using DDEV for CI. It's a great solution! Running all of our builds and tasks in CI solved nearly every "it works on my machine" problem we had. However, it introduced a new challenge: CI startup performance.
Unlike using a CI-provider's built-in tooling, DDEV is not typically cached or included in CI runners. Just running the setup-ddev action can take up to a minute on a bad day. That doesn't include any additional packages or Dockerfile customizations a project may include. At Lullabot, we use ddev-playwright to run end-to-end tests. Browser engines and their dependencies are heavy! System dependencies can be north of 1GB of compressed packages (that then have to be installed), and browsers themselves can be several hundred MB. This was adding several minutes of setup time just to run a single test.
Luckily, based on our experience building Tugboat, we knew that the technology to improve startup performance existed. When WarpBuild was announced with Snapshot support in 2024, we immediately started testing it out. We theorized that the performance improvement of snapshots would result in significant startup time improvement. Here's how we set it up!
We had three parallel jobs that all required DDEV:
Note that our Playwright tests themselves run in parallel on a single worker as well, using lullabot/playwright-drupal. This allows us to optimize the additional startup time for installing Drupal itself (which can't be cached in a snapshot) across many tests.
After linking WarpBuild to our GitHub repository, we had to update our workflows. For the full combined example, see the repository at ddev/ddev-ci-warpbuild-example.
Here is an example representing the changes we made to our workflow after enabling Snapshots in the WarpBuild UI. At a high level, here's the flow we want to create with our GitHub jobs:
```mermaid
flowchart TD
    A[determine-snapshot: <br>Hash key files] --> B[Request WarpBuild runner<br>with snapshot key]
    B --> C{Snapshot exists?}
    C -->|"Yes (fast path)"| D[Restore snapshot<br>DDEV pre-installed]
    C -->|"No (first run)"| E[Install DDEV, browsers,<br>and dependencies]
    D --> F[Start DDEV & run tests]
    E --> F
    F --> G{First run?}
    G -->|Yes| H[Clean up & save snapshot]
    G -->|No| I[Done!]
    H --> I
```
Start with a basic workflow to trigger on pull requests and on merges to main.
```yaml
name: "WarpBuild Snapshot Example"
on:
  push:
    branches: [main]
  pull_request:
```
Before running our real work, we need to know what snapshot we could restore from. We start by creating a hash of key files that affect what gets saved in the snapshot. For example, if Playwright (and its browser and system dependencies) are upgraded by Renovate, we want a new snapshot to be created. Extend or modify these files to match your own project setup.
```yaml
jobs:
  determine-snapshot:
    # This could be a WarpBuild runner too!
    runs-on: ubuntu-24.04
    outputs:
      snapshot: ${{ steps.snapshot-base.outputs.snapshot }}
    steps:
      - uses: actions/checkout@v6
      - name: Determine Snapshot Base
        id: snapshot-base
        run: |
          set -x
          hash=$(cat .github/workflows/test.yml test/playwright/.yarnrc.yml test/playwright/yarn.lock | md5sum | cut -c 1-8)
          echo "snapshot=$hash" >> $GITHUB_OUTPUT
        shell: bash
```
WarpBuild needs some additional configuration to tell GitHub Actions to use it as a runner. This could be as simple as runs-on: 'warp-<runner-type>' if you aren't using snapshots. WarpBuild has many runner options available, including ARM and spot instances to reduce costs further.
The runs-on statement:
We also switch to the WarpBuild cache (so it's local to the runner) and check out the project. Update the cache paths as appropriate for your project.
```yaml
jobs:
  # other jobs...
  build-and-test:
    needs: [determine-snapshot]
    runs-on:
      "${{ contains(github.event.head_commit.message, '[warp-no-snapshot]') &&
      'warp-ubuntu-2404-x64-16x' ||
      format('warp-ubuntu-2404-x64-16x;snapshot.key=my-project-ddev-1.25.1-v1-{0}', needs.determine-snapshot.outputs.snapshot) }}"
    steps:
      - uses: WarpBuilds/cache@v1
        with:
          path: |
            ${{ github.workspace }}/.ddev/.drainpipe-composer-cache
            ${{ github.workspace }}/vendor
            ${{ github.workspace }}/web/core
            ${{ github.workspace }}/web/modules/contrib
          key: ${{ runner.os }}-composer-full-${{ hashFiles('**/composer.lock') }}
      - uses: actions/checkout@v6
```
We need to add logic to either start from scratch and install everything or restore from a snapshot. Since DDEV isn't installed by default in runners, we can use its presence to easily determine if we're running from inside a snapshot or not. We save these values for later use.
```yaml
jobs:
  # other jobs...
  build-and-test:
    steps:
      # ... previous steps ...
      - name: Find ddev
        id: find-ddev
        run: |
          DDEV_PATH=$(which ddev) || DDEV_PATH=''
          echo "ddev-path=$DDEV_PATH" >> "$GITHUB_OUTPUT"
          if [ -n "$DDEV_PATH" ]; then
            echo "ddev found at: $DDEV_PATH (restored from snapshot)"
          else
            echo "ddev not found (fresh runner, will install)"
          fi
```
If ddev exists, we can skip installing it:
```yaml
jobs:
  # other jobs...
  build-and-test:
    steps:
      # ... previous steps ...
      - name: Install ddev
        uses: ddev/github-action-setup-ddev@v1
        if: ${{ steps.find-ddev.outputs.ddev-path != '/usr/bin/ddev' }}
        with:
          autostart: false
          # When updating this version, also update the snapshot key above
          version: 1.25.1
```
At this point, we've got DDEV ready to go, so we can start it and run tests or anything else.
```yaml
jobs:
  # other jobs...
  build-and-test:
    steps:
      # ... previous steps ...
      - name: Start ddev
        run: |
          # Playwright users may want to run `ddev install-playwright` here.
          ddev start
          ddev describe
      - name: Run tests
        run: |
          ddev exec echo "Running tests..."
          # Replace this with one or more test commands for your project.
          ddev task test:playwright
```
Now, tests have passed and we can create a snapshot if needed. If tests fail, we never create a snapshot so that we don't accidentally commit a broken environment.
We shut down DDEV since we're going to clean up generated files. This keeps our snapshot a bit smaller and gives us an opportunity to clean up any credentials that might be used as a part of the job. While we don't typically need a Pantheon token for tests, we do need it for some other jobs we run with DDEV.
```yaml
jobs:
  # other jobs...
  build-and-test:
    steps:
      # ... previous steps ...
      - name: Clean up for snapshot
        if: ${{ steps.find-ddev.outputs.ddev-path != '/usr/bin/ddev' }}
        run: |
          # Stop ddev to ensure clean state
          ddev poweroff
          # Remove any cached credentials or tokens
          rm -f ~/.terminus/cache/session
          # Clean git state and temporary files
          git clean -ffdx
```
Now we can actually save the snapshot. We skip this if we can since it takes a bit of time to save and upload. There's no point in rewriting our snapshot if it hasn't changed! The wait-timeout-minutes is set very high, but in practice this step only takes a minute or two. We just don't want this step to fail if Amazon is slow.
```yaml
jobs:
  # other jobs...
  build-and-test:
    steps:
      # ... previous steps ...
      - name: Save WarpBuild snapshot
        uses: WarpBuilds/snapshot-save@v1
        if: ${{ steps.find-ddev.outputs.ddev-path != '/usr/bin/ddev' }}
        # Using a matrix build? Avoid thrashing snapshots by only saving from one shard.
        # if: ${{ matrix.shard == 1 && steps.find-ddev.outputs.ddev-path != '/usr/bin/ddev' }}
        with:
          # Must match the snapshot.key in runs-on above
          alias: "my-project-ddev-1.25.1-v1-${{ needs.determine-snapshot.outputs.snapshot }}"
          fail-on-error: true
          wait-timeout-minutes: 30
```
To test, once you have jobs passing, you can rerun them from the GitHub Actions UI. If everything is working, you will see all steps related to installing DDEV skipped.
Note: We don't pin actions to hashes in these examples for easy copy/paste, but for security we always use Renovate to pin hashes for us. We would also like to use Renovate Custom Managers to automatically offer DDEV upgrades and keep the version number in sync across all files and locations.
ddev start command.

While this seems like a lot of work, it was only about half a day to set up and test – and that was when WarpBuild was in beta, had minimal documentation, and some rough edges. We haven't really had to touch this code since. Setting up new projects takes an hour, at most.
Do you have other optimizations for DDEV in CI to share? Post in the comments, we'd love to hear them!
Hey experienced developers! You know how to tame Drush, charm Composer, debug like a detective, juggle configs, and wrestle with tricky modules. But there’s an event that will max out your RAM with Drupal hacks, insights, and wisdom.
Chicago may be famous for deep-dish pizza, but this spring it’s serving up something even more satisfying: deep dives into Drupal. DrupalCon Chicago 2026 is the place for seasoned developers to sharpen their skills, swap stories, and maybe laugh at a few module mishaps along the way.
It’s a code playground with a side of professional growth — sessions designed to challenge, inspire, and connect. Ready to level up your craft and enjoy a few geeky chuckles? The program is packed with standout sessions, but here are a few you absolutely won’t want to miss.
Drupal Canvas, the new-generation page builder, offers multiple ways to create pages for different audiences. Non-tech users will enjoy intuitive drag-and-drop tools, ready-made components, and even building pages from a prompt to an AI agent. But what’s in it for developers? First of all, it’s Code Components.
JavaScript in Drupal keeps evolving, and Code Components in Drupal Canvas are the latest twist worth watching. First unveiled at DrupalCon Atlanta, they came with a zero‑setup, in-browser editor and instant support for React and Tailwind CSS.
Things have moved fast: data fetching and Next.js-style image optimization are now supported, and experiments with server-side rendering and third-party imports are in progress. And editing isn’t limited to the browser anymore — a new CLI lets you work with Code Components anywhere, opening doors to decoupled frontends and fresh workflows.
Catch Bálint Kléri (balintbrews), the technical lead for JavaScript components in Drupal, in his insightful session, where he will walk through what’s stable, what’s experimental, and what’s next. You’ll discover specific approaches and techniques for working with Code Components.
AI-driven automation is changing the ways organizations handle content personalization, workflows, customer support, and data insights. One of the most exciting tools to emerge from the Drupal AI initiative is AI agents — autonomous systems that carry out tasks, make decisions, and pursue goals on behalf of users.
You can learn more about Drupal’s new AI Agents framework from Marcus Johansson (marcus_johansson). On his drupal.org page, Marcus describes himself simply: “I tinker with AI.” But his “tinkering” is transforming Drupal from the ground up: Marcus leads the Drupal AI initiative in Drupal, shaping its architecture, driving its development roadmap, and steering the future of AI-powered tools in Drupal.
In this session, Marcus will show you how Drupal’s Agents framework lets you create business-specific agents without writing a single line of code. Instead of slogging through implementation details, you’ll see how prompt writing and communication skills can drive the interaction, while Drupal quietly handles the complexity behind the curtain.
Join Marcus as he unpacks what agents are, how the framework was built, and how it connects with the MCP (Model Context Protocol). For experienced developers, this session is a chance to explore a tool that cuts through the noise and unlocks fresh possibilities.
Drupal Canvas is gaining serious momentum, and it deserves a closer look from more than one angle. Alongside the earlier-mentioned session on Code Components, this one is a hands-on exploration of how Canvas works hand in hand with some of Drupal’s most powerful tools.
Ted Bowman (tedbow), a long-time Drupal contributor, will show how Drupal Canvas can be combined with core features like Views and popular contributed modules to build advanced setups — all without writing a single line of code.
You’ll see an exciting demo packed with practical examples: creating dynamic landing pages, formatting structured content with Canvas templates, linking field data to SDC (Single-Directory Components) and Code Component properties, building Views inside Canvas templates, and using template slots to give editors more control.
As Canvas continues to evolve, Ted will also spotlight the latest features and contributed modules that extend its capabilities even further.
Drupal offers many ways of workflow automation. But having tasks quietly carried out in the background — triggered by events, checked against conditions, and completed through actions — is a special kind of magic.
Experienced developers may remember the Rules module that pioneered this idea. Today, its modern successor, the Event–Condition–Action (ECA) module, takes the concept further, reimagined for Drupal’s current ecosystem. With a no-code/low-code approach and graphical modeling tools like BPMN, ECA makes building workflows more intuitive and far less intimidating.
Despite its impressive graphical interface with workflow diagrams, ECA needed to become even more approachable, especially for people without prior Drupal experience. So after solid real-world use and plenty of feedback from the community, ECA is entering its next phase. In his session, Jürgen Haas (jurgenhaas), the creator of ECA, will share how things are going with the revamp.
By lowering the barrier for site builders and project managers, the evolving ECA creates more room for developers to extend, integrate, and scale automation. The refreshed interface makes workflows easier to work with, while the underlying architecture opens fresh opportunities for custom plugins, enterprise integrations, and performance tuning.
Drupal has always had a knack for ambitious site building, but the Recipes Initiative is cooking up something new. Instead of distributions that lacked flexibility, we now have lightweight, composable recipes and site templates that make functionality easier to share, remix, and extend.
For experienced developers, it’s about speeding up the boring parts so you can focus on the interesting ones. Default content APIs and config actions are steadily maturing, and the community is already serving up recipes that cut down setup time while keeping flexibility intact.
Step into this session led by Jim Birch (thejimbirch), a renowned Drupal core committer and initiative coordinator. He will walk you through the progress so far, highlight examples from the community, and demonstrate practical authoring workflows. You’ll leave with a clear sense of how recipes fit into Drupal’s future, how to find and apply them effectively, and how to contribute your own to the growing ecosystem.
Embedding external content in Drupal has become trickier as platforms tighten security rules and content policies. If iframes keep letting you down, this session offers a cleaner, more future-proof approach.
Join Pedro Cambra (pcambra), an experienced Drupal contributor, as he shares practical guidance on embedding external content with oEmbed. He explores how Drupal uses the oEmbed standard together with core Media tools and contributed modules like oEmbed Providers to embed third-party content safely and reliably. You’ll get a clear look at how oEmbed works behind the scenes, which modules fit best for different use cases, and how CKEditor handles embedded objects.
The session also touches on enhancing embeds with authentication or privacy controls and building your own oEmbed resources for custom content. Practical examples keep things grounded, with plenty of tips you can apply right away. If you’ve ever wrestled with embeds or want a more robust setup that plays nicely with modern platforms, this session is well worth your time.
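To ground the discussion, here is a minimal consumer-side sketch of the oEmbed standard in Python. The url, format, and maxwidth parameters come from the oEmbed specification; the endpoint value and helper name here are illustrative, not part of Drupal's implementation:

```python
import urllib.parse

def build_oembed_request(endpoint, content_url, maxwidth=800):
    """Build an oEmbed consumer request URL.

    Per the oEmbed spec, the provider endpoint receives the URL of
    the resource to embed plus optional format and size hints, and
    responds with JSON or XML describing the embeddable object.
    """
    params = urllib.parse.urlencode({
        "url": content_url,    # the resource to embed
        "format": "json",      # preferred response format
        "maxwidth": maxwidth,  # optional size hint
    })
    return f"{endpoint}?{params}"

# Example against YouTube's public oEmbed endpoint:
request_url = build_oembed_request(
    "https://www.youtube.com/oembed",
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
)
```

Drupal's core Media system performs this discovery and request cycle for you; the sketch only shows the shape of the request that travels over the wire.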
Theming in Drupal is entering a new era, and this training is designed to keep even seasoned developers ahead of the curve. It will be led by Mike Herchel (mherchel) and Andy Blum (andy-blum), key contributors driving theming innovations in Drupal. The training session dives into Single Directory Components, Drupal Canvas, and modern CSS/JS techniques that will shape how we build themes going forward.
Through hands‑on exercises, you’ll learn to craft reusable components, streamline workflows with Storybook, and deliver designs that are fast, accessible, and maintainable.
You’ll pick up strategies to dodge common page‑builder pitfalls and keep your themes flexible for whatever comes next. If you’re ready to sharpen your skills and future‑proof your toolkit, this training belongs on your schedule. Note that training sessions require an additional ticket.
Every DrupalCon has its traditions, and the Driesnote is one of the most anticipated. For developers who spend their days building and maintaining Drupal sites, the Driesnote is a chance to catch a glimpse of what’s emerging in the platform.
Beyond the usual updates, it’s the place to hear about the newest features, initiatives, and announcements that set the stage for what’s coming next for Drupal. You’ll discover new tools, architectural changes, and exciting directions for core and contributed projects, all straight from Dries Buytaert. Right in the main auditorium, you’ll catch demos that haven’t been shown anywhere else yet.
This session is Drupal’s roadmap in real time. Experienced developers will walk away with inspiration for their own projects, and an insider’s view of upcoming improvements that could change how we work with the platform.
Wrapping up the developer sessions at DrupalCon Chicago, the vibe is clear: Drupal keeps giving us new tools, and it’s up to us to explore them, stress-test them, and help shape what comes next. Canvas, ECA V2, Recipes, Site Templates, and other cool innovations are all evolving fast, and the fun part is digging into the details to see how they really work.
For experienced developers, the focus shifts away from shiny demos to spotting patterns, catching edge cases, and laughing when the “easy” stuff turns into a rabbit hole. These sessions are a reminder that Drupal’s future is being built in real time — and that we still get to shape it through commits, patches, and the occasional late-night debugging marathon.
Authored By: Nadiia Nykolaichuk, DrupalCon Chicago 2026 Marketing & Outreach Committee Member
Today we are talking about Mautic, marketing automation, and its history with Drupal with guest Ruth Cheesley. We'll also cover Mautic ECA as our module of the week.
For show notes visit: https://www.talkingDrupal.com/541
Topics
Ruth Cheesley - ruthcheesley.co.uk RCheesley
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Catherine Tsiboukas - mindcraftgroup.com bletch
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
Every technology cycle has its buzzwords. This one has AI stamped on every roadmap, budget request, and vendor pitch deck. As Nitish Chopra argues, the rush to “integrate AI” into content management systems feels less like strategy and more like panic. Organizations are bolting large language models onto legacy stacks without confronting a harder truth: most digital architectures were never designed to support structured, machine-readable intelligence.
The pattern is becoming predictable across the CMS landscape. Some teams chase quick wins through plugin overload and surface-level integrations. Others pay enterprise premiums for repackaged APIs marketed as innovation. A third group disappears into technical rabbit holes, overengineering AI experiments that never survive contact with production realities. In each case, the failure is architectural. AI is treated as a feature to be installed, not a capability that depends on disciplined data modeling, governance, and system design.
This is where the Drupal ecosystem enters the conversation. For years, Drupal’s insistence on entities, fields, taxonomies, and structured content was criticized as overly complex. Yet those very foundations align with what AI systems require: clean schemas, reusable content objects, and predictable relationships. What once felt rigid now looks intentional. What was labeled pedantic now resembles preparation.
For Drupal builders, agencies, and enterprise stakeholders, the question is not whether to integrate AI, but how to do so without abandoning architectural discipline. AI success in Drupal will not come from chasing wrappers around APIs or cosmetic chatbot add-ons. It will come from doubling down on structured content architecture, refining data governance, and designing composable systems that can support automation at scale. In that sense, the ecosystem’s competitive advantage is not novelty. It is discipline.
This week’s edition reflects on that tension between momentum and method before turning to the stories shaping the ecosystem.
With that, let's shift the spotlight to the important stories from last week.
We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now. To get timely updates, follow us on LinkedIn, Twitter, Bluesky, and Facebook. You can also join us on Drupal Slack at #thedroptimes.
Thank you.
Alka Elizabeth
Sub-editor
The DropTimes
Introduction
There is a subtle bait-and-switch here: I am going to talk about my experience coding with AI in Python, but the lessons learned apply to Drupal and the broader challenges developers face when coding with AI.
Over the past few months, AIs have begun to understand and write code for Drupal, and I want to understand how AI can help me with my Drupal projects. I would be the first to say “Vibe Coding” sounds like something invented in a hipster cafe, but it is here to stay, just like the Frappuccino.
As a Drupal Developer who has written a lot of PHP code over the years, I welcome the opportunity to write less and think more. Call me old-fashioned, but I am a self-taught developer who learned by reading books, even though AI moves so fast that books on the topic are out of date within a year. I decided to look for a book to help with this journey.
A search for “Coding Drupal with AI” yields very few results, yet it is notable that a post titled “Claude Code meets Drupal” by Dries Buytaert, the creator of Drupal, appears on the first screen of results. Sometimes, when learning something new or facing a new challenge, I like to work around the challenge.
For example, when I first started learning Drupal 8 as an experienced Drupal 6/7 developer, I was stumped by Symfony and the OOP patterns being introduced into Drupal, so I spent a few weeks building a Symfony application and then dove deep into Drupal 8. So I decided to approach AI coding in Python because it is a popular programming language that I was curious to learn. I chose to read Coding with AI: Examples in Python by Jeremy Morgan because it focuses first on AI and secondarily on using Python.
My...
If you’ve been following the rapid rise of AI‑driven chatbots and ‘assistant‑as‑a‑service’ platforms, you know one of the biggest pain points is trustworthy, privacy‑preserving web search. AI assistants need access to current information to be useful, yet traditional search engines track every query, building detailed user profiles.
Enter SearXNG - an open‑source metasearch engine that aggregates results from dozens of public search back‑ends while never storing personal data. The new Drupal module lets any Drupal‑based AI assistant (ChatGPT, LLM‑powered bots, custom agents) invoke SearXNG directly from the Drupal site, bringing privacy‑first searching in‑process with your content.
SearXNG aggregates results from up to 247 search services without tracking or profiling users. Unlike Google, Bing or other mainstream search engines, SearXNG removes private data from search requests and doesn't forward anything from third-party services.
Think of it as a privacy-preserving intermediary: your query goes to SearXNG, which then queries multiple search engines on your behalf and aggregates the results, all while keeping your identity completely anonymous.
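For developers curious about the mechanics, here is a minimal sketch in Python of building such a query. The /search endpoint and format=json parameter come from SearXNG's API; the localhost URL is a placeholder for your own instance, and JSON output must be enabled in the instance's settings:

```python
import urllib.parse

def build_searxng_query(base_url, query, categories="general"):
    """Build a request URL for a SearXNG instance's search endpoint.

    The instance forwards the query to its configured upstream
    engines and aggregates the results; no user-identifying data
    is passed along to those engines. JSON output must be enabled
    in the instance's configured search formats.
    """
    params = urllib.parse.urlencode({
        "q": query,
        "format": "json",
        "categories": categories,
    })
    return f"{base_url}/search?{params}"

# Example against a hypothetical local instance:
url = build_searxng_query("http://localhost:8080", "drupal ai module")
```

In the Drupal module this request is made for you by the AI Agent Tool; the sketch just shows what the assistant hands to the intermediary on the user's behalf.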
The Drupal SearXNG module brings this privacy-focused search capability directly into the Drupal ecosystem. It connects Drupal with your preferred SearXNG server (local or remote), includes a demonstration block, and provides an additional submodule that integrates SearXNG with Drupal AI by offering an AI Agent Tool.
This integration is particularly powerful when combined with Drupal's growing AI ecosystem, including the AI module framework, AI Agents and AI Assistants API.
The most compelling benefit is complete privacy protection: when your Drupal AI assistant uses SearXNG to search the web, no identifying data about the user or the query is forwarded to the underlying search engines.
This makes it ideal for organisations in healthcare, government, education and any sector where data privacy is paramount.
By aggregating results from up to 247 search services, SearXNG provides more diverse and comprehensive search results than relying on a single search engine. Your AI assistant gets a broader perspective, potentially finding information that might be missed by individual search engines.
Organisations can run their own SearXNG instance, giving them complete control over how the service is configured and operated.
Getting started is remarkably straightforward thanks to SearXNG's official Docker image, which makes launching a local server as simple as running a single command. This means organisations can have their own private search instance running in minutes, without complex server configuration or dependencies.
The module's AI Agent Tool integration means that Drupal AI assistants can seamlessly incorporate web search into their workflows. Whether it's a chatbot helping users navigate your site or an AI assistant helping content creators research topics, web search becomes just another capability in the assistant's toolkit.
Imagine a corporate intranet where employees use an AI assistant to find both internal documentation and external resources. The assistant can search your internal Drupal content while using SearXNG to find external information, all while maintaining complete privacy about what employees are researching.
Universities and schools increasingly need to protect student privacy. A Drupal-powered learning management system with an AI tutor can use SearXNG to help students research topics without creating profiles of their academic interests and struggles.
Government organisations can leverage AI assistants to help citizens find information and services. Using SearXNG ensures that citizen queries remain private and aren't used for commercial purposes.
The SearXNG Drupal module represents an important step forward in building AI systems that respect user privacy. As AI assistants become more prevalent in web applications, the ability to access current information without compromising privacy will become increasingly valuable.
Drupal's AI framework supports over 48 AI platforms, providing flexibility in choosing AI providers. By combining this with privacy-respecting search through SearXNG, organisations can build powerful, intelligent applications that align with growing privacy expectations and regulations.
Privacy and powerful AI don't have to be mutually exclusive. The SearXNG Drupal module proves that organisations can build intelligent, helpful AI assistants that respect user privacy. Whether you're building internal tools, public-facing applications, or specialised platforms, this module provides a foundation for privacy-first AI that can search the web without compromising user trust.
As data privacy regulations continue to evolve and users become more aware of digital privacy issues, tools like the SearXNG module will become increasingly essential. By adopting privacy-first approaches now, organisations can build user trust while delivering the intelligent, helpful experiences that modern web applications demand.
Find out more and download on the dedicated SearXNG Drupal project page.
Join us THURSDAY, February 19 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)
We don't have anything specific on the agenda this month, so we'll have plenty of time to discuss anything that's on our minds at the intersection of Drupal and nonprofits. Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google document at https://nten.org/drupal/notes!
All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.
This free call is sponsored by NTEN.org and open to everyone.
Information on joining the meeting can be found in our collaborative Google document.
While Artificial Intelligence is evolving rapidly, many applications remain experimental and difficult to implement in professional production environments. The Drupal AI Initiative addresses this directly, driving responsible AI innovation by channelling the community's creative energy into a clear, coordinated product vision for Drupal.
In this article, the third in a series, we highlight the outcomes of the latest development sprints of the Drupal AI Initiative. Part one outlines the 2026 roadmap presented by Dries Buytaert. Part two addresses the organisation and new working model for the delivery of AI functionality.
Authors: Arian, Christoph, Piyuesh, Rakhi (alphabetical)
Dries Buytaert presenting the status of Drupal AI Initiative at DrupalCon Vienna 2025
To turn the potential of AI into a reliable reality for the Drupal ecosystem, we have developed a repeatable, high-velocity production model that has already delivered significant results in its first four weeks.
To maximize efficiency and scale, development is organized into two closely collaborating workstreams. Together, they form a clear pipeline from exploration and prototyping to stable functionality:
This structure is powered by a Request for Proposal (RFP) model, sponsored by 28 organizations partnering with the Drupal AI Initiative.
The management of these workstreams is designed to rotate every six months via a new RFP process. Currently, 1xINTERNET provides the Product Owner for Product Development and QED42 provides the Product Owner for Innovation, while FreelyGive provides core technical architecture. This model ensures the initiative remains sustainable and neutral, while benefiting from the consistent professional expertise provided by the partners of the Drupal AI Initiative.
The professional delivery of the initiative is driven by our AI Partners, who provide the specialized resources required for implementation. To maintain high development velocity, we operate in two-week sprint iterations. This predictable cadence allows our partners to effectively plan their staff allocations and ensures consistent momentum.
The Product Owners for each workstream work closely with the AI Initiative Leadership to deliver on the one-year roadmap. They maintain well-prepared backlogs, ensuring that participating organizations can contribute where their specific technical strengths are most impactful.
By managing the complete development lifecycle, including software engineering, UX design, quality assurance, and peer reviews, the sprint teams ensure the delivery of stable and well-architected solutions that are ready for production environments.
The work of the AI Initiative provides important functionality to the recently launched Drupal CMS 2.0. This release represents one of the most significant evolutions in Drupal’s 25-year history, introducing Drupal Canvas and a suite of AI-powered tools within a visual-first platform designed for marketing teams and site builders alike.
The strategic cooperation between the Drupal AI Initiative and the Drupal CMS team ensures that our professional-grade AI framework delivers critical functionality while aligning with the goals of Drupal CMS.
The initial sprints demonstrate the high productivity of this dual-workstream approach, driven directly by the specialized staff of our partnering organizations. In the first two weeks, the sprint teams resolved 143 issues, creating significant momentum right from the first sprint.
Screenshot: Drupal AI Dashboard
This surge of activity resulted in the largest regular patch release in the history of the Drupal AI module. This achievement was made possible by the intensive collaboration between several expert companies working in sync. Increased contribution from our partners will allow us to further accelerate development velocity, improving the capacity to deliver more advanced technical features in the coming months.
Screen recording: Agents Debugger
While the volume of work is significant, some new features stand out. Here are a few highlights from our recent sprint reviews:
Our success so far is thanks to the companies who have stepped up as Drupal AI Partners. These organizations are leading the way in defining how AI and the Open Web intersect.
A huge thank you to our main contributors of the first two sprints (alphabetical order):
We invite further participation from the community. If your organization is interested in contributing expert resources to the forefront of AI development, we encourage you to join the initiative.
Now that some of the projects that opted in to GitLab issues are using them, they are getting real-world experience with how the issue workflow in GitLab is slightly different. More and more projects are being migrated each week, so sooner or later you will probably run into the following situations.
When creating issues, the form is very simple. Add a title and a description and save, that's it!
GitLab has different work items when working on projects, like "Incidents", "Tasks" and "Issues". Our matching type will always be "Issue". Maintainers might choose to use the other types, but all integrations with Drupal.org will be made against "Issue" items.
As mentioned in the previous blog post GitLab issue migration: the new workflow for migrated projects, all the metadata for issues is managed via labels. Maintainers will select the labels once the issue is created.
Users without sufficient privileges cannot set things like priority or tags. Maintainers can grant the "reporter" role to some users to help manage this metadata. Reporters will be able to add or edit metadata when adding or editing issues. We acknowledge that this is probably the biggest difference from working with Drupal.org issues. We are listening to feedback and trying to identify the real needs first (thanks to the projects that opted in), before implementing anything permanent.
Reporters will be able to add or edit labels when creating or editing issues.
So far, the biggest missing piece we have identified is the ability to mark an issue as RTBC. Bouncing between "Needs work" and "Needs review" tends to happen organically via comments among the contributors participating in an issue, but RTBC is probably what some maintainers look for before merging.
Statuses like these are conventions we agreed on as a community a while back. RTBC is one; NW (Needs Work) vs. NR (Needs Review) is another. We could use this transition to GitLab issues to define the equivalent ones.
GitLab merge requests offer several choices that we could easily leverage.
We encourage maintainers to look at the merge requests listing instead (like this one). Both "draft" vs. "ready" and "approved" are features you can filter by when viewing merge requests for a project.
There are automated messages when opening or closing issues. They provide links related to fork management, fork information, and access requests when creating forks, as well as reminders to update the contribution record links on the issue to track credit information.
When referring to a Drupal.org issue from another Drupal.org issue, you can continue to use the [#123] syntax in the summary and comments, but enter the full URL in the "related issues" entry box.
When referring to a GitLab issue from another GitLab issue, use the #123 syntax, without the enclosing [ ].
For cross-platform references (Drupal to GitLab or GitLab to Drupal), you need to use the full URL.
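To make the three styles concrete, here is a small, purely illustrative Python helper that expands [#123]-style references into the full URLs needed for cross-platform references. Drupal.org's /i/ short-link format is real; the helper itself is hypothetical and not part of any official tooling:

```python
import re

# Drupal.org's short issue URL format, e.g. https://www.drupal.org/i/123
DRUPAL_ISSUE_URL = "https://www.drupal.org/i/{num}"

def expand_drupal_refs(text):
    """Expand [#123]-style Drupal.org references into full URLs.

    Within Drupal.org, [#123] works as-is; within a GitLab project,
    #123 refers to that project's issues; crossing platforms in
    either direction requires the full URL, which is what this
    illustrative helper produces.
    """
    return re.sub(
        r"\[#(\d+)\]",
        lambda m: DRUPAL_ISSUE_URL.format(num=m.group(1)),
        text,
    )
```

For example, a comment drafted with Drupal.org conventions could be run through this before being pasted into a GitLab issue on the other platform.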
Same as before, we want to go and review more of the already opted-in projects, collect feedback, act on it when needed, and then we will start to batch-migrate the next set: low-usage projects, projects with a low number of issues, etc.
The above should get us 80% of the way regarding the total number of projects to migrate, and once we have gathered more feedback and iterated over it, we'll be ready to target higher-volume, higher-usage projects.
Related blog posts:
For the past months, the AI Initiative Leadership Team has been working with our contributing partners to define what the Drupal AI initiative should focus on in 2026. That plan is now ready, and I want to share it with the community.
This roadmap builds directly on the strategy we outlined in Accelerating AI Innovation in Drupal. That post described the direction. This plan turns it into concrete priorities and execution for 2026.
The full plan is available as a PDF, but let me explain the thinking behind it.
Producing consistently high-quality content and pages is really hard. Excellent content requires a subject matter expert who actually knows the topic, a copywriter who can translate expertise into clear language, someone who understands your audience and brand, someone who knows how to structure pages with your component library, good media assets, and an SEO/AEO specialist so people actually discover what you made.
Most organizations are missing at least some of these skillsets, and even when all the people exist, coordinating them is where everything breaks down. We believe AI can fill these gaps, not by replacing these roles but by making their expertise available to every content creator on the team.
For large organizations, this means stronger brand consistency, better accessibility, and improved compliance across thousands of pages. For smaller ones, it means access to skills that were previously out of reach: professional copywriting, SEO, and brand-consistent design without needing a specialist for each.
Used carelessly, AI just makes these problems worse by producing fast, generic content that sounds like everything else on the internet. But used well, with real structure and governance behind it, AI can help organizations raise the bar on quality rather than just volume.
Drupal has always been built around the realities of serious content work: structured content, workflows, permissions, revisions, moderation, and more. These capabilities are what make quality possible at scale. They're also exactly the foundation AI needs to actually work well.
Rather than bolting on a chatbot or a generic text generator, we're embedding AI into the content and page creation process itself, guided by the structure, governance, and brand rules that already live in Drupal.
For website owners, the value is faster site building, faster content delivery, smarter user journeys, higher conversions, and consistent brand quality at scale. For digital agencies, it means delivering higher-quality websites in less time. And for IT teams, it means less risk and less overhead: automated compliance, auditable changes, and fewer ad hoc requests to fix what someone published.
We think the real opportunity goes further than just adding AI to what we already have. It's also about connecting how content gets created, how it performs, and how it gets governed into one loop, so that what you learn from your content actually shapes what you build next.
The things that have always made Drupal good at content are the same things that make AI trustworthy. That is not a coincidence, and it's why we believe Drupal is the right place to build this.
The 2026 plan identifies eight capabilities we'll focus on. Each is described in detail in the full plan, but here is a quick overview:
These eight capabilities are where the official AI Initiative is focusing its energy, but they're not the whole picture for AI in Drupal. There is a lot more we want to build that didn't make this initial list, and we expect to revisit the plan in six months to a year.
We also want to be clear: community contributions outside this scope are welcome and important. Work on migrations, chatbots, and other AI capabilities continues in the broader Drupal community. If you're building something that isn't in our 2026 plan, keep going.
Over the past year, we've brought together organizations willing to contribute people and funding to the AI initiative. Today, 28 organizations support the initiative, collectively pledging more than 23 full-time equivalent contributors. That is over 50 individual contributors working across time zones and disciplines.
Coordinating 50+ people across organizations takes real structure, so we've hired two dedicated teams from among our partners:
Both teams are creating backlogs, managing issues, and giving all our contributors clear direction. You can read more about how contributions are coordinated.
This is a new model for Drupal. We're testing whether open source can move faster when you pool resources and coordinate professionally.
If you're a contributing partner, we're asking you to align your contributions with this plan. The prioritized backlogs are in place, so pick up something that fits and let's build.
If you're not a partner but want to contribute, jump in. The prioritized backlogs are open to everyone.
And if you want to join the initiative as an official partner, we'd absolutely welcome that.
This plan wasn't built in a room by itself. It's the result of collaboration across 28 sponsoring organizations who bring expertise in UX, core development, QA, marketing, and more. Thank you.
We're building something new for Drupal, in a new way, and I'm excited to see where it goes.
— Dries Buytaert
For the past months, the AI Initiative Leadership Team has been working with our contributing partners to define what the Drupal AI initiative should focus on in 2026. That plan is now ready, and I want to share it with the community.
This roadmap builds directly on the strategy we outlined in Accelerating AI Innovation in Drupal. That post described the direction. This plan turns it into concrete priorities and execution for 2026.
The full plan is available as a PDF, but let me explain the thinking behind it.
Producing consistently high-quality content and pages is really hard. Excellent content requires a subject matter expert who actually knows the topic, a copywriter who can translate expertise into clear language, someone who understands your audience and brand, someone who knows how to structure pages with your component library, good media assets, and an SEO/AEO specialist so people actually discover what you made.
Most organizations are missing at least some of these skillsets, and even when all the people exist, coordinating them is where everything breaks down. We believe AI can fill these gaps, not by replacing these roles but by making their expertise available to every content creator on the team.
For large organizations, this means stronger brand consistency, better accessibility, and improved compliance across thousands of pages. For smaller ones, it means access to skills that were previously out of reach: professional copywriting, SEO, and brand-consistent design without needing a specialist for each.
Used carelessly, AI just makes these problems worse by producing fast, generic content that sounds like everything else on the internet. But used well, with real structure and governance behind it, AI can help organizations raise the bar on quality rather than just volume.
Drupal has always been built around the realities of serious content work: structured content, workflows, permissions, revisions, moderation, and more. These capabilities are what make quality possible at scale. They're also exactly the foundation AI needs to actually work well.
Rather than bolting on a chatbot or a generic text generator, we're embedding AI into the content and page creation process itself, guided by the structure, governance, and brand rules that already live in Drupal.
For website owners, the value is faster site building, faster content delivery, smarter user journeys, higher conversions, and consistent brand quality at scale. For digital agencies, it means delivering higher-quality websites in less time. And for IT teams, it means less risk and less overhead: automated compliance, auditable changes, and fewer ad hoc requests to fix what someone published.
We think the real opportunity goes further than just adding AI to what we already have. It's also about connecting how content gets created, how it performs, and how it gets governed into one loop, so that what you learn from your content actually shapes what you build next.
The things that have always made Drupal good at content are the same things that make AI trustworthy. That is not a coincidence, and it's why we believe Drupal is the right place to build this.
The 2026 plan identifies eight capabilities we'll focus on. Each is described in detail in the full plan, but here is a quick overview:
These eight capabilities are where the official AI Initiative is focusing its energy, but they're not the whole picture for AI in Drupal. There is a lot more we want to build that didn't make this initial list, and we expect to revisit the plan in six months to a year.
We also want to be clear: community contributions outside this scope are welcome and important. Work on migrations, chatbots, and other AI capabilities continues in the broader Drupal community. If you're building something that isn't in our 2026 plan, keep going.
Over the past year, we've brought together organizations willing to contribute people and funding to the AI initiative. Today, 28 organizations support the initiative, collectively pledging more than 23 full-time equivalent contributors. That is over 50 individual contributors working across time zones and disciplines.
Coordinating 50+ people across organizations takes real structure, so we've hired two dedicated teams from among our partners:
Both teams are creating backlogs, managing issues, and giving all our contributors clear direction. You can read more about how contributions are coordinated.
This is a new model for Drupal. We're testing whether open source can move faster when you pool resources and coordinate professionally.
If you're a contributing partner, we're asking you to align your contributions with this plan. The prioritized backlogs are in place, so pick up something that fits and let's build.
If you're not a partner but want to contribute, jump in. The prioritized backlogs are open to everyone.
And if you want to join the initiative as an official partner, we'd absolutely welcome that.
This plan wasn't built in isolation. It's the result of collaboration across 28 sponsoring organizations who bring expertise in UX, core development, QA, marketing, and more. Thank you.
We're building something new for Drupal, in a new way, and I'm excited to see where it goes.
— Dries Buytaert
The Drupal AI Initiative officially launched in June 2025 with the release of the Drupal AI Strategy 1.0 and a shared commitment to advancing AI capabilities in an open, responsible way. What began as a coordinated effort among a small group of committed organizations has grown into a substantial, sponsor-funded collaboration across the Drupal ecosystem.
Today, 28 organizations support the initiative, collectively pledging more than 23 full-time equivalent contributors representing over 50 individual contributors working across time zones and disciplines. Together, sponsors have committed more than $1.5 million in combined cash and in-kind contributions to move Drupal AI forward.
The initiative now operates across multiple focused areas, including leadership, marketing, UX, QA, core development, innovation, and product development. Contributors are not only exploring what’s possible with AI in Drupal, but are building capabilities designed to be stable, well-governed, and ready for real-world adoption in Drupal CMS.
Eight months in, this is more than a collection of experiments. It is a coordinated, community-backed investment in shaping how AI can strengthen content creation, governance, and measurable outcomes across the Drupal platform.
As outlined in the 2026 roadmap, this year focuses on delivering eight key capabilities that will shape how AI works in Drupal CMS. Achieving that level of focus and quality requires more than enthusiasm and good ideas. It requires coordination at scale.
From the beginning, sponsors contributed both people and funding so the initiative could be properly organized and managed. With 28 organizations contributing more than 50 people across multiple workstreams, it was clear that sustained progress would depend on dedicated delivery management to align priorities, organize backlogs, support contributors, and maintain predictable execution.
To support this growth, the initiative ran a formal Request for Proposal (RFP) process to select delivery management partners to help coordinate work across both innovation and product development workstreams. This was not a shift in direction, but a continuation of our original commitment: to build AI capabilities for Drupal in a way that is structured, sustainable, and ready for real-world adoption.
To identify the right delivery partners, we launched the RFP process in October 2025 at DrupalCon Vienna. The RFP was open exclusively to sponsors of the Drupal AI Initiative. From the start, our goal was to run a process that reflected the responsibility we carry as a sponsor-funded, community-driven initiative.
The timeline included a pre-proposal briefing, an open clarification period, and structured review and interview phases. Proposals were independently evaluated against clearly defined criteria tailored to both innovation and production delivery. These criteria covered governance, roadmap and backlog management, delivery approach, quality assurance, financial oversight, and demonstrated experience contributing to Drupal and AI initiatives.
Following an independent review, leadership held structured comparison sessions to discuss scoring, explore trade-offs, clarify open questions, and ensure decisions were made thoughtfully and consistently. Final discussions were held with shortlisted vendors in December, and contracts were awarded in early January.
The selected partners are engaged for an initial six-month period. At the end of that term, the RFP process will be repeated.
This process was designed not only to select capable partners but to steward sponsor contributions responsibly and align with Drupal’s values of openness, collaboration, and accountability.
Following the structured selection process, two contributing partners were selected to support delivery across the initiative’s key workstreams.
QED42 will focus on the Innovation workstream, helping coordinate forward-looking capabilities aligned with the 2026 roadmap. QED42 has been an active contributor to Drupal AI efforts from the earliest stages and has played a role in advancing AI adoption across the Drupal ecosystem. Their contributions to initiatives such as Drupal Canvas AI, AI-powered agents, and other community-driven efforts demonstrate both technical depth and a strong commitment to open collaboration. In this role, QED42 will support structured experimentation, prioritization, and delivery alignment across innovation work.
1xINTERNET will lead the Product Development workstream, supporting the transition of innovation into stable, production-ready capabilities within Drupal CMS. As a founding sponsor and co-leader within the initiative, 1xINTERNET brings deep experience in distributed Drupal delivery and governance. Their longstanding involvement in Drupal AI and broader community leadership positions them well to guide roadmap execution, release planning, backlog coordination, and predictable productization.
We are grateful to QED42 and 1xINTERNET for their continued commitment to the initiative and for stepping into this role in service of the broader Drupal community. We also want to thank all participating organizations for the time, thought, and care invested in the RFP process; the strong level of interest and the high standard of submissions reflect the caliber of agencies and contributors engaged in advancing Drupal AI.
Both organizations were selected not only for their delivery expertise but for their demonstrated investment in Drupal AI and their alignment with the initiative’s goals. Their role is to support coordination, roadmap alignment, and disciplined execution across contributors, ensuring that sponsor investment and community effort translate into tangible, adoptable outcomes.
Contracts began in early January. Two development sprints have already been completed, and a third sprint is now underway, establishing a clear and predictable delivery cadence.
QED42 and 1xINTERNET will share more details about their processes and early progress in an upcoming blog post.
With the 2026 roadmap now defined and structured delivery teams in place, the Drupal AI Initiative is positioned to execute with greater clarity and focus. The eight capabilities outlined in the one-year plan provide direction. Dedicated delivery management provides the coordination needed to turn that direction into measurable progress.
Predictable sprint cycles, clearer backlog management, and improved cross-workstream alignment allow contributors to focus on building, refining, and shipping capabilities that can be adopted directly within Drupal CMS. Sponsor investment and community contribution are now supported by a delivery model designed for scale and sustainability.
This next phase is about disciplined execution. It means shipping stable, well-governed AI capabilities that site owners can enable with confidence. It means connecting innovation to production in a way that reflects Drupal’s strengths in structure, governance, and long-term maintainability.
We are grateful to the sponsors and contributors who have made this possible. As agencies and organizations continue to join the initiative, we remain committed to transparency, collaboration, and delivering meaningful value to the broader Drupal community.
We are entering a year of focused execution, and we are ready to deliver.
The Drupal AI Initiative is built on collaboration. Sponsors contribute funding and dedicated team members. Contributors bring expertise across UX, core development, QA, marketing, innovation, and production. Leadership provides coordination and direction. Together, this shared investment makes meaningful progress possible.
We extend our thanks to the 28 sponsoring organizations and the more than 50 contributors who are helping shape the future of AI in Drupal. Their commitment reflects a belief that open source can lead in building AI capabilities that are stable, governed, and built for real-world use.
As we move into 2026, we invite continued participation. Contributing partners are encouraged to align their work with the roadmap and engage in the active workstreams. Organizations interested in joining the initiative are welcome to connect and explore how they can contribute.
We have laid the foundation. The roadmap is clear. Structured delivery is in place. With continued collaboration, we are well-positioned to deliver meaningful AI capabilities for the Drupal community and the organizations it serves.
The Drupal Association engineering team is announcing the end of life (EOL) of the first generation of the Automatic Update API, which relies on the original signing solution for update validation, an approach that differs from the one used by later versions.
Drupal.org’s APIs for Automatic Updates 7.x-1.x and 8.x-1.x will be discontinued on May 4th, 2026. These versions of Automatic Updates have been unsupported since Drupal core 7 and 8, the versions they are compatible with, became unsupported.
Release contents hash files (example) will not be updated and will expire May 12th, 2026. They may be removed after this date with no notice.
In place updates (example) will no longer be generated after May 4th, 2026. These are generated on demand and existing update files will be removed.
APIs for supported versions of Automatic Updates will continue to be supported indefinitely.
Automatic Updates v1 was an important early step toward improving the safety and reliability of Drupal updates. However, its underlying signing and validation model has now been superseded by a more robust and secure approach, with TUF and Rugged.
If you are still using Automatic Updates under the 7.x-1.x or 8.x-1.x branches, now is the time to plan your upgrade to a supported version, or to implement custom updates using the supported API in your own CI pipeline. Doing so ensures continued support, improved security, and alignment with Drupal’s long-term update strategy.
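For teams considering the "custom updates in your own CI" route, the core idea is simply to verify a downloaded release archive against a checksum obtained from a trusted source before deploying it. Here is a minimal Python sketch of that check. Note that the function name, the payload, and the hashes used below are illustrative placeholders, not real Drupal.org values or APIs:

```python
import hashlib


def verify_release(archive_bytes: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded release archive against a published SHA-256 checksum.

    In a real pipeline, archive_bytes would be the downloaded release tarball
    and expected_sha256 would come from a trusted, separately fetched source.
    """
    actual = hashlib.sha256(archive_bytes).hexdigest()
    return actual == expected_sha256


# Placeholder data: stands in for a release tarball and its published hash.
payload = b"example release contents"
expected = hashlib.sha256(payload).hexdigest()

print(verify_release(payload, expected))   # True: content matches the checksum
print(verify_release(payload, "0" * 64))   # False: checksum mismatch
```

A plain checksum check like this only guards against corruption; the TUF-based approach used by supported versions goes further, protecting against compromised or stale update metadata as well.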
As DrupalCon Chicago 2026 draws closer, conversations about community are extending beyond sessions, socials, and contributions to include how we care for one another in shared spaces. The Drupal Community Working Group's Community Health Team has been working with event organizers to gather practical, community-informed health and safety guidance that reflects how people actually experience DrupalCon.
The information below provides resources for navigating the conference, the venue, and the city with confidence, while reinforcing Drupal's longstanding commitment to an inclusive, respectful, and supportive community where everyone can show up as their whole selves.
Have questions or concerns about DrupalCon Chicago? Feel free to drop by the Community Working Group's public office hours this Friday, February 13 at 10am ET / 1500 UTC.
Join the #community-health Drupal Slack channel for more information. A meeting link will be posted there a few minutes before office hours.
Updated: February 10, 2026
The information that was previously provided here has been moved to the DrupalCon Chicago Health & Safety page.
For more details, or if these policies are updated, please go to the DrupalCon Chicago official page:
DrupalCon Chicago Health & Safety
The Health and Safety page originated from discussions among the CWG Community Health Team and the DrupalCon Steering Committee after reviewing event websites from other communities in North America. We found the general health and safety information useful, and we are working on a template for the Planning Drupal Events Playbook that other Drupal events can use going forward.
The information we gathered for the DrupalCon Chicago Health & Safety page was inspired by the Linux Foundation's Open Source Summit Minneapolis Health & Safety page, APHA Health & Safety page, American Geophysical Union Safety and Security Guidance page, and DjangoCon Travel info page.
Friday, February 20 at Florida DrupalCamp in Orlando and Thursday, March 12 at DrupalCamp NJ in Princeton.
Artificial intelligence is reshaping how we build websites and create content — and the Drupal AI ecosystem is making it easier than ever for site builders to harness that power responsibly.
If you've been curious about integrating AI into your Drupal workflow but aren't sure where to start, this is the workshop for you.
This full-day, hands-on workshop is designed for beginners who want to learn the fundamentals of using AI within Drupal. Over the course of the day, you'll work directly with key modules in the Drupal AI ecosystem — including AI Automators, Field Widget Actions, and AI Agents — gaining practical experience with setup, configuration, and real-world content generation techniques.
The emphasis throughout is on responsible AI usage: leveraging these tools to enhance (not replace) your effectiveness and efficiency as a developer or content author. You'll explore various setup options and companion modules for auditing and exploring AI capabilities, and walk away with hands-on experience generating content in a thoughtful, responsible manner.
This workshop is aimed at Drupal site builders at the beginner level. No prior Drupal AI experience is necessary. If you can navigate the Drupal admin interface and have a basic understanding of AI prompt engineering, you're ready to dive in.
You'll need basic knowledge of AI prompt engineering, basic Drupal site-building skills, and a paid API account with an AI provider (OpenAI, Gemini, or Anthropic recommended). Alternatively, a free 30-day trial with the Amazee.ai AI provider is available.
Mike Anello (@ultimike) has been teaching Drupal professionally for over 15 years. As co-founder and lead instructor at DrupalEasy, he runs several well-known training programs including Drupal Career Online, Professional Module Development, and Professional Single Directory Components. Mike is a frequent presenter at Drupal events across the United States and Europe, and is deeply involved in the Drupal community as an organizer, code contributor, and documentation contributor. You'll be learning from one of the most experienced Drupal educators in the community.
This full-day workshop is being offered at two upcoming DrupalCamps on the US East Coast: Friday, February 20 at Florida DrupalCamp in Orlando, and Thursday, March 12 at DrupalCamp NJ in Princeton.
Registration for both events is now open, and space is limited. Don't wait to secure your spot.
Know a colleague, client, or friend who's been wanting to explore AI in Drupal? Please share this article with anyone who might benefit from a hands-on, beginner-friendly introduction to the Drupal AI module ecosystem. The more people in the Drupal community who understand how to use AI responsibly, the stronger our ecosystem becomes.