Even as a seasoned Drupal developer, when upgrading a Drupal 10.x site to Drupal 11.x you can still hit weird issues in older legacy code that had (unbeknownst to you) relied on functionality that changed with the upgrade to Drupal core and its dependencies.
I've just spent a long morning in debugging hell with some JavaScript functionality on a client's site that had sailed through the last few Drupal upgrades without issue. For context, this site was originally built on Drupal 7 (all those years ago!), was entirely rebuilt and migrated to Drupal 9, upgraded to Drupal 10, and is now finally being upgraded to Drupal 11(.3).
The site has a fair number of JavaScript-powered carousels, and at the time it was originally built, the 'in vogue' solution for responsive carousels was the excellent Slick Carousel (GitHub link). I won't go into too many details about the package here, but it has worked well and hasn't caused any issues through previous Drupal upgrades.
The package depends on jQuery, and with the move to jQuery 4.x in Drupal core (an optional dependency!), this is where the problems started. Now, it's unfair to expect a package whose last official release was nearly nine years ago to magically work with a newer version of jQuery that didn't exist at the time (the latest version in 2017 was 3.2, and the plugin was designed around jQuery 2.x). But with the aim of keeping this Drupal 11 upgrade within budget, we decided not to rewrite the site's entire carousel functionality.
That would have meant implementing an entirely new (ideally non-jQuery-dependent!) plugin such as Swiper or Glider, rewriting the DOM structure in every Twig template that contains carousel markup, tweaking the styling for each carousel, and rewriting the various bits of JavaScript to work with the new plugin. If the site only had one simple carousel in a single place, swapping out would have been a suitable option, but some of the carousels on this site are quite complex, so I decided to try to make Slick work.
Even though there hasn't been an official release of the package since October 2017, there has been work in the master branch in recent years to make the plugin work with more modern jQuery versions. The previous move to jQuery 3.x in Drupal core (versions 8.5 and above) didn't cause any noticeable issues with the Slick plugin (at least in our use case), but the jQuery 4 upgrade that comes with Drupal 11 finally did.
jQuery 4 has finally removed a number of deprecated APIs, one of which is jQuery.type, which Slick used internally in multiple parts of its code. Without that function available, the JavaScript blows up! Luckily, there have been a number of commits to Slick's master branch in the last few years, including one in 2022 that fixed these deprecated calls, allowing it to work properly with more modern jQuery versions. The commit in question was aimed at jQuery 3.x issues, but because it swapped out the deprecated API calls, it happens to work (mostly) with 4.x as well.
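(As an aside: if you're ever stuck with a legacy plugin you can't update at all, a small compatibility shim can restore the removed function. This is a hedged sketch, not an officially supported approach; it only approximates jQuery.type's old behaviour:)

// Rough sketch of a jQuery.type shim for plugins that still call it.
// jQuery.type was removed in jQuery 4; this approximates the old behaviour.
if (window.jQuery && typeof jQuery.type === 'undefined') {
  jQuery.type = function (obj) {
    if (obj == null) {
      return String(obj); // 'null' or 'undefined'
    }
    // Objects and functions: derive the type from Object.prototype.toString,
    // e.g. '[object Array]' -> 'array'; primitives fall through to typeof.
    return (typeof obj === 'object' || typeof obj === 'function')
      ? Object.prototype.toString.call(obj).slice(8, -1).toLowerCase()
      : typeof obj;
  };
}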
So to get the master version in place instead of the latest official release (which is very old), I made the following change to our package.json (the project in question uses npm) in the list of dependencies, changing:
"slick-carousel": "^1.8.1",
to (the latest commit hash in the master branch):
"slick-carousel": "github:kenwheeler/slick#279674a815df01d100c02a09bdf3272c52c9fd55",
and then re-installed the project's JS dependencies to bring the new version in.
(For reference, we have some code that takes the version installed into the node_modules folder and copies it into an appropriate folder in our site's custom theme directory; we then define this as a custom Drupal library and include it only where needed.)
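(The exact copy mechanism isn't important, but for illustration, a minimal sketch of how such a step might look as an npm script. The paths and script name here are hypothetical, not our actual setup:)

{
  "scripts": {
    "copy:slick": "cp -r node_modules/slick-carousel/slick web/themes/custom/mytheme/js/vendor/slick"
  }
}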
With the latest version in place, the JS error from the previously used deprecated APIs was gone, yay! But now we had other issues to worry about.
The first thing I noticed, now that the JS error was gone, was that a carousel on the homepage looked incorrectly styled compared to the version in production. Closer inspection of the DOM revealed that, despite no changes to any of the invocation calls to the Slick plugin, the slides were now wrapped in two extra divs, instead of the slides themselves getting the slick-slide class (amongst others).
At that point I assumed this was simply newer behaviour in the updated Slick code, so I made a few quick CSS changes to account for these extra wrapping divs. Later on, I discovered the real reason for them... keep reading to find out what it was.
(Screenshots: the broken carousel styling vs. the fixed version.)
This solved the immediate problem, and I then went hunting for the other carousels on the site, which is where things got very interesting (and time-consuming!).
The next carousels with an apparent problem were on the site's main product page, where one is used on the left-hand side (displayed vertically) as a thumbnail navigation for the 'main' product image gallery displayed next to it. The left-hand one appeared to be functioning mostly correctly on its own (with a small style issue), but clicking on it would not advance the main slideshow at all! With no JS errors in the console and nothing obviously wrong, cue the debugging rabbit hole...
I won't go into too much detail about all the paths I went down whilst debugging this, but needless to say it involved swapping out the usually minified version of the Slick plugin for the non-minified version, using the excellent Chrome JS debugger, and stepping through exactly what was going on when Slick was trying to set this carousel up and why it was behaving the way it was.
After a while, I finally realised the issue appeared after Slick had started initialising itself - after invoking the plugin in the site's code - but before it finished setup. During its setup, something was going wrong internally with the reference to the slides, which meant they were not copied into the slick-track div, and the previously mentioned (new) wrapping divs were there in the DOM, but with no classes on them at all.
The JS debugging revealed that the number of $slides being returned from the following Slick code was actually zero!
_.$slides =
_.$slider
.children( _.options.slide + ':not(.slick-cloned)')
.addClass('slick-slide');
This meant the rest of the code that copies the slides into the slick-track div (amongst other setup procedures) was failing. But how could this possibly be? Running some debug code just before initialising Slick showed the correct number of slides in the DOM...
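(For what it's worth, the sanity check was as simple as something like this, run just before the slick() call. The selector names here are made up for illustration:)

// Just before initialising: the slides are present and matchable.
console.log($('.product-gallery').children('.gallery-slide').length); // non-zero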
The devil is in the detail here: it turns out the children() selector was no longer matching my slide container's children. But if we didn't change anything about the code that invokes the carousel, why exactly was it broken?
The key lies in Slick's (optional) slide parameter, which controls the element query used to find the slides. The working vertical carousel (amongst others that were working) wasn't using the slide parameter, as the DOM structure for that carousel has the slides directly below the slide container. The broken carousel was using it (for a specific reason I won't get too far into, but it involves other markup in the DOM at that particular place serving another purpose on the site, hence needing to specify the selector).
It turns out that if you omit the slide parameter, Slick internally uses > div as the slide selector, and (obviously) if you specify a selector, it uses that. Because our previous invocation code specified a custom selector, and (as mentioned above) there were now two extra wrapping divs in play, Slick's selector .children( _.options.slide + ':not(.slick-cloned)') no longer matched anything: my slide targets had inadvertently become grandchildren!
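To make that concrete, here's a hedged sketch of the two invocation styles (the selectors are made up, not our actual code):

// Works after the update: no 'slide' option, so Slick falls back to its
// default selector and still finds the (now wrapped) direct children.
$('.product-thumbnails').slick({
  vertical: true,
});

// Breaks after the update: the original slides are now grandchildren
// (wrapped by buildRows()), so this custom selector matches nothing.
$('.product-gallery').slick({
  slide: '.gallery-slide',
});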
But why are there now two wrapping <div> elements around each slide where previously there were not?
This is the real question that needs answering, now that we understand why the selector wasn't working during the setup for the slides themselves.
Slick's default setting for the rows of a slideshow is 1 (unless overridden when invoking the plugin). Internally, a buildRows() function (called during initialisation of the carousel) checks whether the number of rows is > 0, and if so, wraps the inner slides in these two divs!
Slick.prototype.buildRows = function() {
    var _ = this, a, b, c, newSlides, numOfSlides, originalSlides, slidesPerSection;
    newSlides = document.createDocumentFragment();
    originalSlides = _.$slider.children();
    if(_.options.rows > 0) {
        slidesPerSection = _.options.slidesPerRow * _.options.rows;
        numOfSlides = Math.ceil(
            originalSlides.length / slidesPerSection
        );
        for(a = 0; a < numOfSlides; a++){
            var slide = document.createElement('div');
            for(b = 0; b < _.options.rows; b++) {
                var row = document.createElement('div');
                for(c = 0; c < _.options.slidesPerRow; c++) {
                    var target = (a * slidesPerSection + ((b * _.options.slidesPerRow) + c));
                    if (originalSlides.get(target)) {
                        row.appendChild(originalSlides.get(target));
                    }
                }
                slide.appendChild(row);
            }
            newSlides.appendChild(slide);
        }
        _.$slider.empty().append(newSlides);
        _.$slider.children().children().children()
            .css({
                'width':(100 / _.options.slidesPerRow) + '%',
                'display': 'inline-block'
            });
    }
};

A quick check of setting rows to 0 in the carousel settings confirmed this was indeed the overall problem, and immediately my carousels looked and behaved the way they did before the update. The rows setting is designed for putting Slick into a "grid mode", where you specify how many rows you want and how many slides per row with the slidesPerRow parameter.
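For comparison, if you actually want grid mode, the invocation looks something like this (a hedged example; the selector is made up):

// Grid mode: 2 rows of 3 slides per 'page', i.e. 6 original slides per section.
$('.grid-carousel').slick({
  rows: 2,
  slidesPerRow: 3,
});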
But why are most of the carousels now getting their slides wrapped in rows by buildRows(), even though I haven't changed the rows parameter from its previous default of 1?
There is some confusion in the plugin's documentation: it states the default value is 1, but also says "Setting this to more than 1 initializes grid mode. Use slidesPerRow to set how many slides should be in each row." That is clearly no longer true, as a commit present in the previous version we were using (1.8.1) changed this check from if(_.options.rows > 1) to if(_.options.rows > 0) without updating the documentation to say so.
... or is it? The final "gotcha" was that after comparing the minified JS provided in release 1.8.1 with the non-minified JS... the minified JS of 1.8.1 does indeed check if rows > 1, not rows > 0, while the non-minified code checks if rows > 0 - so they don't match, doh! 🤦
Minified code excerpt:
l.options.rows > 1) {

Non-minified code excerpt:

if(_.options.rows > 0) {

The master branch commit I'm running for the jQuery fixes correctly has the minified code matching the un-minified code, both checking rows > 0 - consistency, yey!
What a facepalm moment, eh?
So, whether you're running the updated Slick code on a Drupal 11.x site that uses jQuery 4.x, or an un-minified 1.8.1 release on an older Drupal site running jQuery 3.x: if you don't want your carousels to get the extra wrapping divs, which can cause serious selector issues (when you don't really need the 'grid mode' at all!), just pass rows: 0 as an option when initialising your Slick carousel along with the other options, and it'll behave as it did before.
e.g. (a super basic carousel options initialisation)
$('.some-carousel-selector').slick({
arrows: true,
dots: true,
slidesToShow: 3,
slidesToScroll: 3,
rows: 0, // <-- This is the key if you don't need the grid mode!
});

If you've made it this far through the article, well done! Hopefully this article saves you from the same few hours of pain that I experienced!
In hindsight, in the time it took to work out exactly what was going on here and write this article up, I could probably have got most of the carousels working with an entirely different plugin. What should have been a 15-minute job turned into hours, but sometimes these unexpected things just happen when doing upgrades, especially when some of the code in use is from a different era (2017 wasn't that long ago, was it?). It's a tricky choice sometimes, knowing when to leave legacy code in place and make it work, and when it's time to jump ship to another solution.
If it had turned out that there were a multitude of JavaScript errors with the Slick plugin under jQuery 4.x, with no obvious solutions short of knowing the inner workings of the plugin, then I probably would have changed tack and started re-implementing the carousels with another solution. But that wasn't the case here, and the issues turned out to be a lot more nuanced.
On the plus side, another project that needs a Drupal 11 upgrade also uses Slick carousel from a long time ago, so that one really should be a 15-minute job with the knowledge gained here :)
I've been reading Drupal Core commits for more than 15 years. My workflow hasn't changed much over time. I subscribe to the Drupal Core commits RSS feed, and every morning, over coffee, I scan the new entries. For many of them, I click through to the issue on Drupal.org and read the summary and comments.
That workflow served me well for a long time. But when Drupal Starshot expanded my focus beyond Drupal Core to include Drupal CMS, Drupal Canvas, and the Drupal AI initiative, it became much harder to keep track of everything. All of this work happens in the open, but that doesn't make it easy to follow.
So I built a small tool I'm calling Drupal Digests. It watches the Drupal.org issue queues for Drupal Core, Drupal CMS, Drupal Canvas, and the Drupal AI initiative. When something noteworthy gets committed, it feeds the discussion and diff to AI, which writes me a summary: what changed, why it matters, and whether you need to do anything. You can see an example summary to get a feel for the format.
Each issue summary currently lives as its own Markdown file in a GitHub repository. Since I still like my morning coffee and RSS routine, I also generate RSS feeds that you can subscribe to in your favorite reader.
I built this to scratch my own itch, but realized it could help with something bigger. Staying informed is one of the hardest parts of contributing to a large Open Source project. These digests can help new contributors ramp up faster, help experienced module maintainers catch API changes, and make collaboration across the project easier.
I'm still tuning the prompts. Right now it costs me less than $2 a day in tokens, so I'm committed to running it for at least a year to see whether it's genuinely useful. If it proves valuable, I could imagine giving it a proper home, with search, filtering, and custom feeds.
For now, subscribe to a feed and tell me what you think.
Join Karen Horrocks and Stephen Musgrave as they introduce the upcoming non-profit summit at DrupalCon 2026 in Chicago. In this comprehensive fireside chat, they explore how AI can be integrated to serve a nonprofit's mission, plus the dos and don'ts of AI implementation. Hear insights from leading nonprofit professionals, learn about the variety of breakout sessions available, and discover the benefits of Kubernetes for maximizing ROI. Whether you're a developer, content editor, or a strategic planner, this session is crucial for understanding the future of nonprofit operations with cutting-edge technology.
For show notes visit: https://www.talkingDrupal.com/cafe015
Stephen Musgrave

Stephen (he/him) is a co-founder, partner and Lead Technologist at Capellic, an agency that builds and maintains websites for non-profits. Stephen is bullish on keeping things simple – not simplistic. His goal is to maximize the return on investment and minimize the overhead in maintaining the stack for the long term.
Stephen has been working with the web for over 30 years. He was initially drawn to the magic of using code to create web art, added in his love for relational databases, and has spent his career building websites with an unwavering commitment to structured content.
When Stephen isn't at his desk, he's often running to and swimming in Barton Springs Pool, getting a bit too wound-up at Austin FC games, and playing Legos with his little one.
Karen Horrocks

Karen (she/her, karen11 on drupal.org and Drupal Slack) is a Web and Database Developer for the Physicians Committee for Responsible Medicine, a nonprofit dedicated to saving and improving human and animal lives through plant-based diets and ethical and effective scientific research.
Karen began her career as a government contractor at NASA Goddard Space Flight Center developing websites to distribute satellite data to the public. She moved to the nonprofit world when the Physicians Committee, an organization that she supports and follows, posted a job opening for a web developer. She has worked at the Physicians Committee for over 10 years creating websites that provide our members with the information and tools to move to a plant-based diet.
Karen is a co-moderator of NTEN's Nonprofit Drupal Community. She spoke on a panel at the 2019 Nonprofit Summit at DrupalCon Seattle and is helping to organize the 2026 Nonprofit Summit at DrupalCon Chicago.
Resources

- Nonprofit Summit Agenda: https://events.drupal.org/chicago2026/session/summit-non-profit-guests-must-pre-register
- Register for the Summit (within the DrupalCon workflow): https://events.drupal.org/chicago2026/registration
- Funding Open Source for Digital Sovereignty: https://dri.es/funding-open-source-for-digital-sovereignty
- NTEN's Drupal Community of Practice Zoom call (1p ET on third Thursday of the month except August and December): https://www.nten.org/drupal/notes
- Nonprofit Drupal Slack Channel: #nonprofits on Drupal Slack
Guests

- Karen Horrocks - karen11 - www.pcrm.org
- Stephen Musgrave - capellic - capellic.com
If you’ve been following the rapid rise of AI‑driven chatbots and ‘assistant‑as‑a‑service’ platforms, you know one of the biggest pain points is trustworthy, privacy‑preserving web search. AI assistants need access to current information to be useful, yet traditional search engines track every query, building detailed user profiles.
Enter SearXNG - an open‑source metasearch engine that aggregates results from dozens of public search back‑ends while never storing personal data. The new Drupal module lets any Drupal‑based AI assistant (ChatGPT, LLM‑powered bots, custom agents) invoke SearXNG directly from the Drupal site, bringing privacy‑first searching in‑process with your content.
SearXNG aggregates results from up to 247 search services without tracking or profiling users. Unlike Google, Bing or other mainstream search engines, SearXNG removes private data from search requests and doesn't forward anything from third-party services.
Think of it as a privacy-preserving intermediary: your query goes to SearXNG, which then queries multiple search engines on your behalf and aggregates the results, all while keeping your identity completely anonymous.
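To make that flow concrete, here's a hedged sketch of what a direct query against a self-hosted SearXNG instance can look like. The instance URL is hypothetical, and the JSON output format has to be enabled in the instance's settings; the Drupal module's AI Agent Tool wraps this kind of call for you:

// Hedged sketch: querying a SearXNG instance's JSON API directly.
// 'https://searx.example.org' is a hypothetical instance URL, and the
// 'json' output format must be enabled in the instance's settings.
const instance = 'https://searx.example.org';
const query = encodeURIComponent('drupal privacy modules');

fetch(`${instance}/search?q=${query}&format=json`)
  .then((response) => response.json())
  .then((data) => {
    // Each aggregated result carries a title, url, and content snippet,
    // with no user profile attached to the request.
    data.results.slice(0, 5).forEach((r) => console.log(r.title, r.url));
  });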
The Drupal SearXNG module brings this privacy-focused search capability directly into the Drupal ecosystem. It connects Drupal with your preferred SearXNG server (local or remote), includes a demonstration block, and provides an additional submodule that integrates SearXNG with Drupal AI by offering an AI Agent Tool.
This integration is particularly powerful when combined with Drupal's growing AI ecosystem, including the AI module framework, AI Agents and AI Assistants API.
The most compelling benefit is complete privacy protection. When your Drupal AI assistant uses SearXNG to search the web, queries stay anonymous: nothing is tracked or profiled, and no private data is forwarded to third-party services. This makes it ideal for organisations in healthcare, government, education and any sector where data privacy is paramount.
By aggregating results from up to 247 search services, SearXNG provides more diverse and comprehensive search results than relying on a single search engine. Your AI assistant gets a broader perspective, potentially finding information that might be missed by individual search engines.
Organisations can run their own SearXNG instance, giving them complete control over its configuration, the search engines it aggregates, and how query data is handled.
Getting started is remarkably straightforward thanks to SearXNG's official Docker image, which makes launching a local server as simple as running a single command. This means organisations can have their own private search instance running in minutes, without complex server configuration or dependencies.
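For example, spinning up a local instance can be as simple as the following (a sketch using the official searxng/searxng image; check the SearXNG documentation for the currently recommended flags and volume mounts):

docker run --rm -d -p 8080:8080 searxng/searxng

After that, the instance answers on http://localhost:8080 and can be pointed at from the module's settings.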
The module's AI Agent Tool integration means that Drupal AI assistants can seamlessly incorporate web search into their workflows. Whether it's a chatbot helping users navigate your site or an AI assistant helping content creators research topics, web search becomes just another capability in the assistant's toolkit.
Imagine a corporate intranet where employees use an AI assistant to find both internal documentation and external resources. The assistant can search your internal Drupal content while using SearXNG to find external information, all while maintaining complete privacy about what employees are researching.
Universities and schools increasingly need to protect student privacy. A Drupal-powered learning management system with an AI tutor can use SearXNG to help students research topics without creating profiles of their academic interests and struggles.
Government organisations can leverage AI assistants to help citizens find information and services. Using SearXNG ensures that citizen queries remain private and aren't used for commercial purposes.
The SearXNG Drupal module represents an important step forward in building AI systems that respect user privacy. As AI assistants become more prevalent in web applications, the ability to access current information without compromising privacy will become increasingly valuable.
Drupal's AI framework supports over 48 AI platforms, providing flexibility in choosing AI providers. By combining this with privacy-respecting search through SearXNG, organisations can build powerful, intelligent applications that align with growing privacy expectations and regulations.
Privacy and powerful AI don't have to be mutually exclusive. The SearXNG Drupal module proves that organisations can build intelligent, helpful AI assistants that respect user privacy. Whether you're building internal tools, public-facing applications, or specialised platforms, this module provides a foundation for privacy-first AI that can search the web without compromising user trust.
As data privacy regulations continue to evolve and users become more aware of digital privacy issues, tools like the SearXNG module will become increasingly essential. By adopting privacy-first approaches now, organisations can build user trust while delivering the intelligent, helpful experiences that modern web applications demand.
Find out more and download on the dedicated SearXNG Drupal project page.
For years we have been talking about how Drupal got too expensive for the markets we used to serve. Regional clients, small and medium businesses in Latin America, Africa, Asia, anywhere where $100,000 websites are simply not a reality. We watched them go to WordPress. We watched them go to Wix. Not because Drupal was worse, but because the economics stopped working.
That conversation is changing.
Drupal CMS 2.0 landed in January 2026. And with it came a set of tools that, combined intelligently, make something possible that was not realistic before: an affordable, professional Drupal site delivered for $2,000, with margin, for markets that could not afford us before.
I want to show you the math. Not to sell you a fantasy, but because I did the exercise and the numbers work. And I am being conservative.
The real budget killer was always theming. Getting a site to look right, behave right, be maintainable, took serious senior hours. That is where budgets went.
Recipes pre-package common configurations so you are not starting from zero. Canvas lets clients and site builders assemble and manage pages visually once a developer sets up the component library.
Dripyard brings professional Drupal themes built specifically for Canvas (although they also work with Layout Builder, Paragraphs, etc.), with excellent quality and accessibility, at around $500. While that may seem expensive, the code quality, designs, and accessibility are top notch and will save at least 20 hours (and usually much more), which would easily eat up a small budget.
Three tools. One problem solved.
We proved the concept about a month ago with laollita.es, built in three days using Umami as a starting point - think of Umami as a version 0.5 of what a proper template should be. We used Drupal AI for translations and AI-assisted development for CSS and small components. That was without formal templates; with proper ones, it gets faster.
Scope first. Most small business sites are simple: services, about us, blog, team, contact. The moment you add custom modules or complex requirements, the budget goes up. This blueprint is for projects that accept that constraint.
Start with Drupal CMS and a Dripyard theme. Recipes handle the configuration. Add AI assistance, a paid plan with a capable model, Claude runs between $15 and $50 depending on usage. Let it help you move faster, but supervise everything. The moment you stop reviewing AI decisions is the moment quality starts leaking.
For hosting, go with a Drupal CMS-specific provider like Drupito, Drupal Forge, or Flexsite, around $20 to $50 per month. Six months included for your client is $300. Those same $300 could go toward a site template from the marketplace launching at DrupalCon Chicago in March 2026, compressing your development time further.
With a constrained scope, the right tools, and AI under supervision, ten hours of net work is realistic. At LATAM-viable rates, $30 per hour on the high side, that is $300 in labor.
The cost breakdown: $500 theme, $300 hosting or template, $300 labor, $50 AI tools. Total: $1,150. Add a $300 buffer and you are at $1,450. Charge $2,000. Your profit is $550, a 27.5% margin.
And I am being conservative. As you build experience with the theme, develop your own component library, and refine your tooling, the numbers improve. The first project teaches you. The third one pays better.
Smaller budget, smaller scope. Start with Byte or Haven, two Drupal CMS site templates on Drupal.org, or generate an HTML template with AI for around $50. A site template from the upcoming marketplace will run around $300.
The math: $300 starting point, $150 for three months of hosting, $200 incidentals. Cost: $650. Charge $1,000. Margin: 35%.
A $1,000 project is a few pages, clear scope, no special requirements. Both you and the client have to be honest about that upfront.
When a client chooses Wix or WordPress to save money, they are choosing a ceiling. The day they need more, they are either rebuilding from scratch or paying for plugins and extras that someone still has to configure, maintain, and update every time the platform breaks something.
A client on Drupal CMS is on a platform that grows with them. The five-page site today can become a complex application tomorrow, on the same platform, without migrating. That is the conversation worth having. Not just what they get today, but what they will never have to undo.
The market in Latin America, Africa, Asia, and similar regions was always there. We just did not have the tools to serve it profitably. Now we do.
Drupal CMS, Canvas, Recipes, Dripyard, Drupal CMS-specific hosting, AI assistance with human oversight. The toolkit exists. Get back on trail.
DDEV v1.25.0 is here, and the community response has been strong. This month also brought three new training blog posts and a survey result that speaks for itself.
- ddev share Provider System → Free Cloudflare Tunnel support, no login or token required. A modular provider system with hooks and CMS-specific configuration. Read more↗
- ddev utility mutagen-diagnose command. Read more↗
- ddev utility xdebug-diagnose command. Read more↗

The 2026 CraftQuest Community Survey↗ collected responses from 253 Craft CMS developers and found DDEV at 72% market share for local development environments. The report notes: "This near-standardization simplifies onboarding for newcomers, reduces support burden for plugin developers, and means the ecosystem can optimize tooling around a single local dev workflow."
I'll be at Florida Drupalcamp this week, and will speak on how to use git worktree to run multiple versions of the same site. I'd love to see you and sit down and hear your experience with DDEV and ways you think it could be better.
Then in March I'll be at DrupalCon Chicago and as usual will do lots of Birds-of-a-Feather sessions about DDEV and related topics. Catch me in the hall, or let's sit down and have a coffee.
"I was today years old when I found out that DDEV exists. Now I am busy migrating all projects to Docker containers." — @themuellerman.bsky.social↗
"ddev is the reason I don't throw my laptop out of the window during local setup wars. one command to run the stack and forget the rest. simple as that." — @OMascatinho on X↗
Every major release brings some friction, and v1.25.0 is no exception. These will generally be solved in v1.25.1, which will be out soon. Here's what to watch for:
- ddev start for users still on v1.24.10 who needed to rebuild containers. We pushed updated images for v1.24.10, so you can either ddev poweroff && ddev utility download-images or just go ahead and upgrade to v1.25.0, which shipped with the updated key. Details↗
- drush sql-cli and similar tools on MariaDB versions below 10.11. Workaround: add extra: "--skip-ssl" to your drush/drush.yml under command.sql.options, or upgrade your database to MariaDB 10.11+. Details↗
- .ddev/mysql/*.cnf doesn't work as expected. #8130↗ #8129↗
- *.ddev.site hostnames. Details↗
- ~/.ddev/traefik/config — leftover v1.25.0 Traefik configuration breaks the older version. Details↗
- ddev start and ddev list, which looks alarming but is harmless. Details↗
- ddev npm and working_dir → ddev npm doesn't currently respect the working_dir web setting, a difference from v1.24.10. Details↗

As always, please open an issue↗ if you run into trouble — it helps us fix things faster. You're the reason DDEV works so well!
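For the MariaDB item above, the drush/drush.yml workaround would look roughly like this (a sketch following the command.sql.options path named in that item):

command:
  sql:
    options:
      extra: "--skip-ssl"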
Join us for upcoming training sessions for contributors and users.
February 26, 2026 at 10:00 US ET / 16:00 CET — Git bisect for fun and profit Add to Google Calendar • Download .ics
March 26, 2026 at 10:00 US ET / 15:00 CET — Using git worktree with DDEV projects and with DDEV itself. Add to Google Calendar • Download .ics
April 23, 2026 at 10:00 US ET / 16:00 CEST — Creating, maintaining and testing add-ons 2026-updated version of our popular add-on training. Previous session recording↗ Add to Google Calendar • Download .ics
Zoom info: Join Zoom Meeting (passcode: 12345)
After the community rallied in January, sponsorship has held steady and ticked up slightly. Thank you!
Previous status (January 2026): ~$8,208/month (68% of goal)
February 2026: ~$8,422/month (70% of goal)
If DDEV has helped your team, now is the time to give back. Whether you're an individual developer, an agency, or an organization — your contribution makes a difference. → Become a sponsor↗
Contact us to discuss sponsorship options that work for your organization.
Compiled and edited with assistance from Claude Code.
At Tag1, we believe in proving AI within our own work before recommending it to clients. This post is part of our AI Applied content series, where team members share real stories of how they're using Artificial Intelligence and the insights and lessons they learn along the way. Here, team member Minnur Yunusov explores how AI-assisted coding helped him rapidly prototype the Document Summarizer Tooltip module for Drupal, while adding AI-generated document previews, improving accessibility, and refining code through real-time feedback.
I started with a simple goal: build a working prototype that could summarize linked documents directly in Drupal, without having to spend too much time on it. AI-assisted coding helped me move from idea to an installable module quickly, even though the first versions weren’t perfect. The focus was on getting something functional that I could iterate on, instead of hand-writing every piece from scratch.
The prototype I put together with AI-assisted coding works and can be installed and tested. You can find it on GitHub at https://github.com/minnur/docs_summarizer_tooltip.
Initially, I tried using Cline with Claude Sonnet to generate the module. It produced a full module structure, but the result didn’t actually work in Drupal. JavaScript in particular needed refactoring, so I switched over to Claude Code, which became my main tool for debugging and refining the implementation.
One of the biggest pain points was the tooltip behavior itself. The tooltip wasn’t positioning correctly, which meant the UX felt off and inconsistent. I used Claude Code iteratively to adjust the JavaScript until the tooltip appeared in the right place and behaved in a way that felt natural.
Another issue was that the tooltip wasn’t showing the title as expected. I tracked down the generated function responsible for rendering the header, wired in my own variables, and then asked Claude Code to include that variable in the header output. After that targeted change, the tooltip finally displayed the title properly and felt much closer to what I wanted.
The core concept of the module is straightforward: detect document links on a page and show an AI-generated summary in a tooltip on hover. It started life as a PDF-only prototype, focused on a single file type so I could validate the idea. Once I had the tooltip behavior working smoothly, with correct positioning, title rendering, and consistent UX, I was ready to expand the scope. I asked Claude Code to refactor the module to support more file types beyond PDFs and rename it to “Document Summarizer Tooltip.”
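For a sense of the mechanics (this is an illustrative sketch, not the module's actual code), detecting document links by file extension can be as simple as:

// Illustrative sketch, not the module's actual code: find links to
// documents by file extension so a summary tooltip can be attached.
const extensions = ['pdf', 'doc', 'docx', 'xls', 'xlsx', 'ppt', 'pptx'];
const selector = extensions.map((ext) => `a[href$=".${ext}"]`).join(', ');

document.querySelectorAll(selector).forEach((link) => {
  link.addEventListener('mouseenter', () => {
    // Request the AI-generated summary and position the tooltip here.
  });
});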
The refactor mostly worked, but the rename was incomplete. Some files kept the old name and needed manual updates. This was a good reminder that while AI can handle broad changes efficiently, it still needs a human to double-check details across Drupal files and configuration.
Once the basic behavior was there, I wanted to think about accessibility. A tooltip full of AI-generated content is not very helpful if screen readers or keyboard users can’t access it. I asked the AI to help with adding accessibility considerations as a next step, including ARIA attributes and behavior that would work beyond simple mouse hover.
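The kind of wiring involved looks roughly like this (again a hedged sketch; the .document-link class and element structure are made up for illustration):

// Hedged, illustrative sketch of accessible tooltip wiring.
// role="tooltip" plus aria-describedby lets screen readers announce the
// summary; focus/blur handlers support keyboard users, not just hover.
const tooltip = document.createElement('div');
tooltip.id = 'doc-summary-tooltip';
tooltip.setAttribute('role', 'tooltip');
tooltip.hidden = true;
document.body.appendChild(tooltip);

document.querySelectorAll('a.document-link').forEach((link) => {
  link.setAttribute('aria-describedby', tooltip.id);
  const show = () => { tooltip.hidden = false; };
  const hide = () => { tooltip.hidden = true; };
  link.addEventListener('mouseenter', show);
  link.addEventListener('mouseleave', hide);
  link.addEventListener('focus', show); // reachable without a mouse
  link.addEventListener('blur', hide);
});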
The initial AI-generated settings form went a bit overboard and included more fields than I actually needed. That said, it did a good job of covering a lot of reasonable options. From there, I was able to prune back the form to something simpler and more focused, which also made the UI easier to understand and configure.
One thing that stood out to me was how well the AI handled some of the integration details. It added Drupal AI integration and CSRF token support with almost no issues, which saved a lot of time. It also recognized variables I introduced and reused them correctly across functions, which made iterations smoother.
At the same time, the generated code was not something I could just drop in without reading. A few Drupal API calls looked right on the surface but weren’t actually real. That required a thorough review and manual fixes. I didn’t have time to add unit tests for this prototype, but in the future I’d like to see how well AI can help suggest or scaffold tests alongside code changes.
There are a few clear ways clients could apply this approach. First, AI-assisted coding is very effective for rapid prototyping, especially when you need to validate a module concept before committing a lot of engineering time. Second, using AI to help with accessibility improvements in templates can speed up the process of making interfaces more inclusive.
Finally, I see a lot of potential in using tools like Claude Code to support test creation and maintenance. While I didn’t get to that stage on this project, generating tests, fixing contributed modules, and experimenting with code improvements all look like strong fits for this kind of workflow. The Document Summarizer Tooltip itself could also be directly useful on content-heavy sites that want instant, inline document previews.
If you’d like to explore the code or try the module yourself, the prototype is available on GitHub at https://github.com/minnur/docs_summarizer_tooltip.
This post is part of Tag1’s AI Applied content series, where we share how we're using AI inside our own work before bringing it to clients. Our goal is to be transparent about what works, what doesn’t, and what we are still figuring out, so that together, we can build a more practical, responsible path for AI adoption.
Bring practical, proven AI adoption strategies to your organization: let's start a conversation! We'd love to hear from you.
Join us THURSDAY, February 19 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)
We don't have anything specific on the agenda this month, so we'll have plenty of time to discuss anything that's on our minds at the intersection of Drupal and nonprofits. Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google document at https://nten.org/drupal/notes!
All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.
This free call is sponsored by NTEN.org and open to everyone.
Information on joining the meeting can be found in our collaborative Google document.
You know the drill. You visit the Drupal Status Report to check if anything needs attention, and you're greeted by a wall of warnings you've seen dozens of times before.
Some warnings are important. Others? Not so much. Maybe you're tracking an update notification in your GitLab and don't need the constant reminder. Perhaps there's a PHP deprecation notice you're already aware of and planning to address during your next scheduled upgrade. Or you're seeing environment-specific warnings that simply don't apply to your infrastructure setup.
The problem is that all these warnings sit alongside genuine issues that actually need your attention. The noise drowns out the signal. You end up scrolling past the same irrelevant messages every time, increasing the chance you'll miss something that matters.
Over time, you develop warning blindness. Your brain learns to ignore the status report page entirely because the signal-to-noise ratio is too low. Then, when a genuine security update appears or a database schema issue emerges, it gets lost in the familiar sea of orange and red.
This problem multiplies across teams. Each developer independently decides which warnings to ignore. New team members have no way to know which warnings matter and which ones are environmental noise. The status report becomes unreliable, defeating its entire purpose.
For most people, Xdebug step debugging in DDEV just works: ddev xdebug on, set a breakpoint, start your IDE's debug listener, and go. DDEV handles all the Docker networking automatically. If you're having trouble, run ddev utility xdebug-diagnose and ddev utility xdebug-diagnose --interactive — they check your configuration and connectivity and tell you exactly what to fix.
This post explains how the pieces fit together and what to do if things do go wrong.
Run ddev xdebug on, set a breakpoint in your entry point (index.php or web/index.php), start your IDE's debug listener, and load a page.

If it doesn't work:
ddev utility xdebug-diagnose
Or for guided, step-by-step troubleshooting:
ddev utility xdebug-diagnose --interactive
The diagnostic checks port 9003 listener status, host.docker.internal resolution, WSL2 configuration, xdebug_ide_location, network connectivity, and whether Xdebug is loaded. It gives actionable fix recommendations.
Xdebug lets you set breakpoints, step through code, and inspect variables — interactive debugging instead of var_dump().
The connection model is a reverse connection: your IDE listens on port 9003 (it's the TCP server), and PHP with Xdebug initiates the connection (it's the TCP client). Your IDE must be listening before PHP tries to connect.
Note: The Xdebug documentation uses the opposite terminology, calling the IDE the "client." We use standard TCP terminology here.
DDEV configures Xdebug to connect to host.docker.internal:9003. This special hostname resolves to the host machine's IP address from inside the container, so PHP can reach your IDE across the Docker boundary.
The tricky part is that host.docker.internal works differently across platforms. DDEV handles this automatically: Docker Desktop on macOS and Windows resolves host.docker.internal natively, while on Linux and in WSL2, DDEV supplies the mapping itself.

You can verify the resolution with:
ddev exec getent hosts host.docker.internal
- ddev xdebug on / off / toggle — Enable, disable, or toggle Xdebug
- ddev xdebug status — Check if Xdebug is enabled
- ddev xdebug info — Show configuration and connection details

Zero-configuration debugging works out of the box:
PhpStorm auto-detects the server and path mappings. If mappings are wrong, check Settings → PHP → Servers and verify /var/www/html maps to your project root.
The PhpStorm DDEV Integration plugin handles this automatically.
Install the PHP Debug extension and create .vscode/launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "Listen for Xdebug",
"type": "php",
"request": "launch",
"port": 9003,
"hostname": "0.0.0.0",
"pathMappings": {
"/var/www/html": "${workspaceFolder}"
}
}
]
}
The VS Code DDEV Manager extension can set this up for you.
WSL2 + VS Code with WSL extension: Install the PHP Debug extension in WSL, not Windows.
Most problems fall into a few categories. The ddev utility xdebug-diagnose tool checks for all of these automatically.
Breakpoint in code that doesn't execute: The #1 issue. Start with a breakpoint in your entry point (index.php) to confirm Xdebug works, then move to the code you actually want to debug.
IDE not listening: Make sure you've started the debug listener. PhpStorm: click the phone icon. VS Code: press F5.
Incorrect path mappings: Xdebug reports container paths (/var/www/html), and your IDE needs to map them to your local project. PhpStorm usually auto-detects this; VS Code needs the pathMappings in launch.json.
Firewall blocking the connection: Especially common on WSL2, where Windows Defender Firewall blocks connections from the Docker container. Quick test: temporarily disable your firewall. If debugging works, add a firewall rule for port 9003.
WSL2 adds networking complexity. The most common problems:
Windows Defender Firewall blocks connections from WSL2 to Windows. Temporarily disable it to test; if debugging works, add a rule for port 9003.
Mirrored mode requires hostAddressLoopback=true in C:\Users\<username>\.wslconfig:
[experimental]
hostAddressLoopback=true
Then wsl --shutdown to apply.
IDE in WSL2 (VS Code + WSL extension): Set ddev config global --xdebug-ide-location=wsl2
Container-based IDEs (VS Code Remote Containers, JetBrains Gateway):
ddev config global --xdebug-ide-location=container
Command-line debugging: Works the same way — ddev xdebug on, start your IDE listener, then ddev exec php myscript.php. Works for Drush, WP-CLI, Artisan, and any PHP executed in the container.
Debugging Composer: Composer disables Xdebug by default. Override with:
ddev exec COMPOSER_ALLOW_XDEBUG=1 composer install
Custom port: Create .ddev/php/xdebug_client_port.ini with xdebug.client_port=9000 (rarely needed).
Debugging host.docker.internal resolution: Run DDEV_DEBUG=true ddev start to see how DDEV determines the IP.
xdebugctl: DDEV includes the xdebugctl utility for dynamically querying and modifying Xdebug settings, switching modes (debug, profile, trace), and more. Run ddev exec xdebugctl --help. See the xdebugctl documentation.
Xdebug map feature: Recent Xdebug versions can remap file paths during debugging, useful when container paths don't match local paths in complex ways. This complements IDE path mappings.
Performance: Xdebug adds overhead. Use ddev xdebug off or ddev xdebug toggle when you're not actively debugging.
Claude Code was used to create an initial draft for this blog, and for subsequent reviews.
If you are a marketing or content leader, DrupalCon Chicago 2026 is already calling your name. You are the special audience whose creative spark and unique perspective shine a light on Drupal in ways developers alone never could. You promote Drupal’s capabilities to the world and ensure the platform reaches the users who need it. You translate technical innovation into stories that resonate with everyone.
Drupal is increasingly built with you in mind. Making Drupal more editor‑friendly has been a clear priority in recent years. Thanks to your feedback and insights, great strides have been made in providing tools and workflows that truly support your creative vision.
This year’s DrupalCon sessions are set to spark bold insights, fresh strategies, and lively discussions. Expect those unforgettable “aha!” moments you’ll want to carry back and weave into your own marketing and content playbook. Here is a curated list of standout sessions designed to help marketing and content leaders turn inspiration into action, build meaningful connections, and discover new ways to make the most out of Drupal’s strengths.
Search Engine Optimization (SEO) has long been one of the web’s most familiar acronyms when it comes to boosting content visibility. But new times bring new terms, and it’s time to meet “GEO” (Generative Engine Optimization).
Indeed, traditional SEO alone is no longer enough in a world where tools like ChatGPT, Perplexity, and Google’s AI Overviews are everyday sources of advice. Today, SEO and GEO must work hand in hand. DrupalCon Chicago 2026 has an insightful session designed to introduce you to a new way of helping your content reach its audience in the age of AI-driven recommendations.
Join brilliant speakers, Jeffrey McGuire (horncologne) and Tracy Evans (kanadiankicks), to stay ahead of the curve. Jeffrey A. “jam” McGuire has been one of the most influential voices in the Drupal community for over two decades, recognized as a marketing strategy and communications expert. With their combined expertise, this session is tailored for marketing and content leaders who want practical, actionable guidance.
You’ll explore how to make your agency, SaaS product, or company stand out when large language models decide which names to surface. Practical strategies will follow, helping you position your expertise, strengthen credibility signals, and align your content with the data sources LLMs rely on. The session will draw from real-world research, client projects, and observations.
It shouldn’t come as a surprise that the next session on this list is also about AI. Of course, you already know that artificial intelligence can churn out content in seconds. But how to make sure it’s consistent with your brand’s voice, feels authentic for your organization, and resonates with your audience?
That’s where Drupal’s latest innovations, Context Control Center and Drupal Canvas, step in. Expect more exciting details at this session at DrupalCon Chicago 2026, which is a must‑see for marketing and content leaders.
This talk will be led by Kristen Pol (kristen pol) and Aidan Foster (afoster), the maintainers behind Context Control Center and Drupal Canvas. Through live demos, you’ll see landing pages, service pages, and blog posts come to life with clear context rules.
You’ll also leave with a practical starter framework for building your own context files, giving you the confidence to guide AI toward content that supports your marketing goals and strengthens your brand presence.
Content chaos is something every marketing and content leader has faced: fragmented messaging, inconsistent standards, and editorial bottlenecks that slow campaigns down. At DrupalCon Chicago 2026, you’ll discover an actionable plan to make your content consistent, organized, and aligned with your brand’s goals.
Join this compelling session by Richard Nosek and C.J. Pagtakhan, seasoned experts in digital strategy. They’ll show how structured governance can scale across departments without stifling creativity. Explore workflows that make life easier for authors, editors, and administrators, including approval processes, audits, and lifecycle management. Discover clear frameworks for roles, responsibilities, and standards.
And because theory is best paired with practice, you’ll see real-world examples of how this approach improves quality, strengthens collaboration, and supports long‑term digital strategy on Drupal websites of every size and scope.
Within agencies, sales and delivery departments share the same ultimate goal: client success. However, sales teams chase ambitious targets, while delivery teams focus on scope, sustainability, and the realities of open‑source implementation. Too often, this push and pull leads to friction, misaligned expectations, and even dips in client satisfaction.
At DrupalCon Chicago 2026, Hannah O’Leary (hannaholeary) and Hannah McDermott (hannah mcdermott) will share how they turned that challenge into a partnership at the Zoocha team. Through transparent handovers, joint scoping, and shared KPIs, they built a framework where both sides thrive together.
This session will highlight how open communication improved forecasting, reduced “us vs. them” dynamics, and directly boosted the quality of Drupal delivery. You’ll leave with practical strategies to apply in your own organization. This includes fostering empathy across teams, aligning metrics, and creating a culture of transparency.
Imagine logging in and instantly seeing what matters most to your content team: recent edits, accessibility checks, broken links, permissions, and so on. That’s the power of a dashboard built not just to look good, but to truly support editors in their daily work.
Join Albert Hughes (ahughes3) and Dave Hansen-Lange (dalin) at their session as they share the journey of shaping a dashboard for 500 editors across 130 sites. You’ll hear how priorities were set, how editor needs were balanced with technical realities, and how decisions shaped a tool that keeps content teams focused and confident.
You’ll walk away with practical lessons you can apply to your own platform and a fresh perspective on how smart dashboards can empower editors and strengthen content leadership.
As marketing and content leaders, you will appreciate a session on Drupal’s latest innovations that can make a difference in your work. One of the greatest presentations for this purpose at DrupalCon Chicago 2026 is the Drupal CMS Spotlights.
Drupal CMS is a curated version of Drupal packed with pre-configured features, many of which are focused on content experiences. For example, you can instantly spin up a ready-to-go blog, SEO tools, events, and more.
The session brings together key Drupal CMS leaders to share insights on recent developments and plans for the future. You’ll hear about Site Templates, the new Drupal Canvas page builder, AI, user experience, usability, documentation, and more.
Gábor Hojtsy (gábor hojtsy), Drupal core committer and initiative coordinator, is known for his engaging style, so you’ll enjoy the session even if some details get technical.
For marketing and content leaders, the launch of the Drupal Site Template Marketplace is big news. Each template combines recipes (pre‑configured feature sets), demo content, and a Canvas‑compatible theme, making it faster than ever to launch a professional, polished website. For anyone focused on storytelling, campaigns, or digital experiences, this is a game‑changer.
The pilot program at DrupalCon Vienna 2025 introduced the first templates, built with the support of Drupal Certified Partners. Now, the Marketplace is expanding, offering a streamlined way to discover, select, and implement templates that align with your goals.
Join Tim Hestenes Lehnen (hestenet), a renowned Drupal core contributor, for a session that dives deeper. He’ll share lessons learned from the pilot, explain how the Marketplace connects to the roadmap for Drupal CMS and Drupal Canvas, and explore what’s next as more templates become available.
The inspiring keynote by Dries Buytaert, Drupal’s founder, is a session that can’t be missed. The Driesnote closes the opening program at DrupalCon Chicago 2026 and sets the tone for the entire conference. It’s your perfect chance to see where Drupal is headed, and how those changes make your work easier, faster, and more creative.
At DrupalCon Vienna 2025, the main auditorium’s audience was the first to hear Dries’ announcements. Among other things, they heard about increased funding for the AI Initiative, doubled contributions to Drupal CMS, and the site templates coming to the Marketplace.
Marketers and content editors were especially amazed to see what’s becoming possible in their work: content templates in Drupal Canvas, a Context Control Center to help AI capture brand voice, and autonomous Drupal agents keeping content up to date automatically.
This year, the mystery of what’s next is yours to uncover. Follow the crowd to the main auditorium at DrupalCon Chicago and expect that signature “wow” moment that leaves the audience buzzing.
Step into DrupalCon Chicago 2026 and reignite your marketing and content vision. Connect with peers, recharge your ideas, and see how Drupal continues to evolve. The sessions are designed to spark creativity and provide tools that can be put to work right away. As you head into the event, keep an open mind, lean into the conversations, and enjoy the energy that comes from sharing ideas across our amazing community.
Authored By: Nadiia Nykolaichuk, DrupalCon Chicago 2026 Marketing & Outreach Committee Member
Today we are talking about Acquia Source, Acquia's fully managed Drupal SaaS: what you can do with it, and how it could change your organization, with guest Matthew Grasmick. We'll also cover AI Single Page Importer as our module of the week.
For show notes visit: https://www.talkingDrupal.com/540
Topics
Matthew Grasmick - grasmash

Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Catherine Tsiboukas - mindcraftgroup.com bletch

MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
Authors: Arian, Christoph, Piyuesh, Rakhi (alphabetical)
While Artificial Intelligence is evolving rapidly, many applications remain experimental and difficult to implement in professional production environments. The Drupal AI Initiative addresses this directly, driving responsible AI innovation by channelling the community's creative energy into a clear, coordinated product vision for Drupal.
Dries Buytaert presenting the status of Drupal AI Initiative at DrupalCon Vienna 2025
In this article, the third in a series, we highlight the outcomes of the latest development sprints of the Drupal AI Initiative. Part one outlines the 2026 roadmap presented by Dries Buytaert. Part two addresses the organisation and new working model for the delivery of AI functionality.
To turn the potential of AI into a reliable reality for the Drupal ecosystem, we have developed a repeatable, high-velocity production model that has already delivered significant results in its first four weeks.
To maximize efficiency and scale, development is organized into two closely collaborating workstreams. Together, they form a clear pipeline from exploration and prototyping to stable functionality:
- Innovation: exploring and prototyping new AI capabilities
- Product Development: turning validated work into stable, production-ready functionality
This structure is powered by a Request for Proposal (RFP) model, sponsored by 28 organizations partnering with the Drupal AI Initiative.
The management of these workstreams is designed to rotate every six months via a new RFP process. Currently, 1xINTERNET provides the Product Owner for Product Development and QED42 provides the Product Owner for Innovation, while FreelyGive provides core technical architecture. This model ensures the initiative remains sustainable and neutral, while benefiting from the consistent professional expertise provided by the partners of the Drupal AI Initiative.
The professional delivery of the initiative is driven by our AI Partners, who provide the specialized resources required for implementation. To maintain high development velocity, we operate in two-week sprint iterations. This predictable cadence allows our partners to effectively plan their staff allocations and ensures consistent momentum.
The Product Owners for each workstream work closely with the AI Initiative Leadership to deliver on the one-year roadmap. They maintain well-prepared backlogs, ensuring that participating organizations can contribute where their specific technical strengths are most impactful.
By managing the complete development lifecycle, including software engineering, UX design, quality assurance, and peer reviews, the sprint teams ensure the delivery of stable and well-architected solutions that are ready for production environments.
The work of the AI Initiative provides important functionality to the recently launched Drupal CMS 2.0. This release represents one of the most significant evolutions in Drupal’s 25-year history, introducing Drupal Canvas and a suite of AI-powered tools within a visual-first platform designed for marketing teams and site builders alike.
The strategic cooperation between the Drupal AI Initiative and the Drupal CMS team ensures that our professional-grade AI framework delivers critical functionality while aligning with the goals of Drupal CMS.
Results from our first Sprints
The initial sprints demonstrate the high productivity of this dual-workstream approach, driven directly by the specialized staff of our partnering organizations. In the first two weeks, the sprint teams resolved 143 issues, creating significant momentum right from the first sprint.
[Screenshot: Drupal AI Dashboard]
This surge of activity resulted in the largest regular patch release in the history of the Drupal AI module. This achievement was made possible by the intensive collaboration between several expert companies working in sync. Increased contribution from our partners will allow us to further accelerate development velocity, improving the capacity to deliver more advanced technical features in the coming months.
[Screen recording: Agents Debugger]
While the volume of work is significant, some new features stand out. Here are a few highlights from our recent sprint reviews:
Our success so far is thanks to the companies who have stepped up as Drupal AI Partners. These organizations are leading the way in defining how AI and the Open Web intersect.
A huge thank you to our main contributors of the first two sprints (alphabetical order):
We invite further participation from the community. If your organization is interested in contributing expert resources to the forefront of AI development, we encourage you to join the initiative.
Drupal, Joomla, Magento, Mautic. All PHP-based, all use Composer, all have talented & passionate communities. And all share the same problems around growth and sustainability. There is a solution.
No, we should not merge the codebases. Sure, you could have AI "Ralph-Wiggum" its way to a monstrosity with passing tests. But these frameworks are trusted for their code quality and security, and using AI to Frankenstein-smush them together would destroy that trust instantly.
What I'm proposing is merging the communities behind a single framework.
Why now? Because (yes, I'm going there) while AI can't merge codebases, it can help developers who already know PHP, Composer, and open source ramp up on a new framework far faster than before. The barrier to a knowledgeable human using a different technology has never been lower.
In the Drupal lifecycle, investment is rarely about the "new." It is about the enduring. While the broader tech landscape often chases the friction of constant disruption, what keeps this community anchored in 2026 is a different kind of momentum: the trust built through predictable engineering and shared governance.
We are currently seeing the dividends of that discipline. Drupal 11 continues to validate the shortened release cadence introduced with Drupal 8. By turning major version upgrades from "all-hands" crises into managed, architectural transitions, the community has removed the penalty for staying current. This isn't just maintenance; it is the infrastructure of reliability that allows enterprises and public institutions to stay invested without fear of the next breaking change.
The defining signal of this issue, however, is the formal commitment of 28 organisations to the Drupal AI roadmap. This goes beyond a technical milestone. In an era where "AI" is often synonymous with proprietary black boxes and reckless speed, the Drupal community is choosing a path of collective sovereignty. When organisations publicly back a roadmap, they signal shared governance, coordinated delivery, and sustained resource allocation. This pledge represents a move to a coordinated delivery workstream with institutional accountability.
The significance of that pledge lies in how AI is being framed. Drupal’s long-standing strengths—structured content, multilingual architecture, revision control, granular permissions, and workflow governance—remain foundational. AI capabilities are positioned as assistive and integrative, operating within those systems rather than bypassing them. The intention is augmentation, not disruption.
Investment in Drupal, then, is less about trend adoption and more about stewardship. It is visible in coordinated roadmaps, predictable release discipline, community-backed delivery structures, and organisations willing to commit resources publicly. Relevance, in this context, is not declared. It is maintained.
Thank you,
Kazima Abbas
Sub-editor,
The DropTimes
The UK's largest Drupal event arrives at the University of Salford, Manchester on 28th February – 1st March — and there's something for everyone, whether you're a seasoned Drupal professional or simply curious about what open source technology can do for your organisation.
Now that some of the projects that opted-in for GitLab issues are using them, they are getting real world experience with how the issue workflow in GitLab is slightly different. More and more projects are being migrated each week so sooner or later you will probably run into the following situations.
When creating issues, the form is very simple. Add a title and a description and save, that's it!
GitLab has different work items when working on projects, like "Incidents", "Tasks" and "Issues". Our matching type will always be "Issue". Maintainers might choose to use the other types, but all integrations with Drupal.org will be made against "Issue" items.
As mentioned in the previous blog post GitLab issue migration: the new workflow for migrated projects, all the metadata for issues is managed via labels. Maintainers will select the labels once the issue is created.
Users without sufficient privileges cannot decide things like priority or tags to use. Maintainers can decide to grant the role "reporter" to some users to help with this metadata for the issues. Reporters will be able to add/edit metadata when adding or editing issues. We acknowledge that this is probably the biggest difference to working with Drupal.org issues. We are listening to feedback and trying to identify the real needs first (thanks to the projects that opted in), before implementing anything permanent.
Reporters will be able to add or edit labels on issue creation or edit:
So far, we have identified the biggest missing piece: the ability to mark an issue as RTBC. Bouncing between "Needs work" and "Needs review" tends to happen organically via comments among the participating contributors in the issue, but RTBC is probably what some maintainers look for to get an issue merged.
These statuses are conventions that we agreed on as a community a while back. RTBC is one; NW (Needs Work) vs. NR (Needs Review) is another. We could use this transition to GitLab issues to define their GitLab equivalents.
GitLab merge requests offer several choices that we could easily leverage.
We encourage maintainers to look at the merge requests listing instead (like this one). Both "draft" vs. "ready" and "approved" are features you can filter by when viewing merge requests for a project.
Automated messages posted when issues are opened or closed provide links for fork management, fork information, and access requests when creating forks, as well as reminders to update the contribution record links on the issue so credit can be tracked.
When referring to a Drupal.org issue from another Drupal.org issue, you can continue to use the [#123] syntax in the summary and comments, but enter the full URL in the "related issues" entry box.
When referring to a GitLab issue from another GitLab issue, use the #123 syntax, without the enclosing [ ].
For cross-platform references (Drupal to GitLab or GitLab to Drupal), you need to use the full URL.
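For example (issue numbers and project name hypothetical): writing [#3412345] in a Drupal.org issue links to another Drupal.org issue, writing #42 in a GitLab issue links to another issue in the same GitLab project, and a cross-platform reference needs the full URL, such as https://www.drupal.org/project/examplemodule/issues/3412345.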
Same as before, we want to go and review more of the already opted-in projects, collect feedback, act on it when needed, and then we will start to batch-migrate the next set: low-usage projects, projects with a low number of issues, etc.
The above should get us 80% of the way regarding the total number of projects to migrate, and once we have gathered more feedback and iterated over it, we'll be ready to target higher-volume, higher-usage projects.
Related blog posts:
Mutagen has been a part of DDEV for years, providing dramatic performance improvements for macOS and traditional Windows users. It's enabled by default on these platforms, but understanding how it works, what can go wrong, and how to debug issues is key to getting the most out of DDEV.
If you're here because you just need to debug a Mutagen problem, this will probably help:
ddev utility mutagen-diagnose
See more below.
This blog is based on the Mutagen Fundamentals and Troubleshooting Contributor Training held on January 22, 2026.
See the slides and the training video.
Mutagen is an asynchronous file synchronization tool that decouples in-container reads and writes from reads and writes on the host machine. Each filesystem enjoys near-native speed because neither is stuck waiting on the other.
Traditional Docker bind-mounts check every file access against the file on the host. On macOS and Windows, Docker's implementation of these checks is not performant. Mutagen solves this by maintaining a cached copy of your project files in a Docker volume, syncing changes between host and container asynchronously.
The primary target of Mutagen syncing is PHP files. These were the fundamental problem with Docker: as Docker-hosted PHP sites grew into the Composer generation, with tens of thousands of files, php-fpm had to open an enormous number of them all at once. Now, with DDEV on macOS using Mutagen, php-fpm opens files that live on its local Linux filesystem, rather than opening ten thousand files that all have to be verified against the host.
Mutagen has delighted many developers with its web-serving performance. One dev said "the first time I tried it I cried."
Mutagen supports filesystem notifications (inotify/fsnotify), so file-watchers on both the host and inside the container are notified when changes occur. This is a significant advantage for development tools that would otherwise need to poll for changes.
When Mutagen is enabled, DDEV:
- Syncs your project files into a Docker volume mounted at /var/www inside the web container
- ddev start: starts the Mutagen daemon on the host if not running, and creates or resumes the sync session
- ddev stop: flushes the sync session to ensure consistency, then pauses it
- ddev composer: triggers a synchronous flush after completion to sync massive filesystem changes
- ddev mutagen reset: removes the Docker volume; the sync session is then recreated from scratch (from the host-side contents) on ddev start

DDEV automatically excludes user-generated files in upload_dirs from Mutagen syncing, using bind-mounts instead. For most CMS types, this is configured automatically, for example:
- Drupal: sites/default/files
- WordPress: wp-content/uploads
- TYPO3: fileadmin, uploads

If your project has non-standard locations, override the defaults by setting upload_dirs in .ddev/config.yaml.
We do note that upload_dirs is no longer an adequate name for this behavior: it was originally intended for user-generated files, but is now also used for heavy directories like node_modules.
The first-time Mutagen sync takes 5-60 seconds depending on project size. A Magento 2 site with sample data might take 48 seconds initially, 12 seconds on subsequent starts. If sync takes longer than a minute, you're likely syncing large files or directories unnecessarily.
node_modules Directories

Frontend build tools create massive node_modules directories that slow Mutagen sync significantly. Solution: add node_modules to upload_dirs:
upload_dirs: # upload_dirs entries are relative to docroot
  - sites/default/files # Keep existing CMS defaults
  - ../node_modules # Exclude from Mutagen
Then run ddev restart. The directory remains available in the container via Docker bind-mount.
If you change files (checking out branches, running git pull, deleting files) while DDEV is stopped, Mutagen has no awareness of these changes. When you start again, it may restore old files from the volume.
Solution: Run ddev mutagen reset before restarting if you've made significant changes while stopped. That removes the volume so everything comes first from the host side in a fresh sync.
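As a minimal sketch of that recovery flow (branch name hypothetical):

ddev stop                  # or the project was already stopped
git checkout feature/big-refactor
git pull
ddev mutagen reset         # discard the stale Docker volume
ddev start                 # fresh sync; the host side wins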
If the same file changes on both host and container while out of sync, conflicts can occur. This is quite rare but possible with:
- npm install or yarn install operations

Best practices:
- Use ddev composer for most Composer operations
- Run ddev mutagen sync after major Git branch changes
- Run ddev mutagen sync after manual Composer operations done inside the container

Mutagen increases disk usage because project code exists both on your computer and inside a Docker volume. The upload_dirs directories are excluded to mitigate this.
Watch for volumes larger than 5GB (warning) or 10GB (critical). Use ddev utility mutagen-diagnose --all to check all projects.
ddev utility mutagen-diagnose Command

DDEV now includes a diagnostic tool that automatically checks for common issues:
ddev utility mutagen-diagnose
This command analyzes:
- node_modules and other large directories being synced

Use the --all flag to analyze all Mutagen volumes system-wide:
ddev utility mutagen-diagnose --all
The diagnostic provides actionable recommendations like:
⚠ 3 node_modules directories exist but are not excluded from sync (can cause slow sync)
→ Add to .ddev/config.yaml:
upload_dirs:
- sites/default/files
- web/themes/custom/mytheme/node_modules
- web/themes/contrib/bootstrap/node_modules
- app/node_modules
→ Then run: ddev restart
If ddev start takes longer than a minute and ddev utility mutagen-diagnose doesn't give you clues about why, watch what Mutagen is syncing:
ddev mutagen reset # Start from scratch
ddev start
In another terminal:
while true; do ddev mutagen st -l | grep "^Current"; sleep 1; done
This shows which files Mutagen is working on:
Current file: vendor/bin/large-binary (306 MB/5.2 GB)
Current file: vendor/bin/large-binary (687 MB/5.2 GB)
Current file: vendor/bin/large-binary (1.1 GB/5.2 GB)
Add problem directories to upload_dirs or move them to .tarballs (automatically excluded).
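For example, using the hypothetical vendor/bin/large-binary from the output above, you could move it into the automatically excluded directory and resync:

mkdir -p .tarballs
mv vendor/bin/large-binary .tarballs/
ddev mutagen reset && ddev start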
Watch real-time sync activity:
ddev mutagen monitor
This shows when Mutagen responds to changes and helps identify sync delays.
Force an explicit sync:
ddev mutagen sync
Check sync status:
ddev mutagen status
View detailed status:
ddev mutagen status -l
Verify that your project works without Mutagen:
ddev config --performance-mode=none && ddev restart
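If the project behaves the same without Mutagen, you can switch it back on the same way:

ddev config --performance-mode=mutagen && ddev restart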
Run diagnostics:
ddev utility mutagen-diagnose
Reset to clean .ddev/mutagen/mutagen.yml:
# Backup customizations first
mv .ddev/mutagen/mutagen.yml .ddev/mutagen/mutagen.yml.bak
ddev restart
Reset Mutagen volume and recreate it:
ddev mutagen reset
ddev restart
Enable debug output:
DDEV_DEBUG=true ddev start
View Mutagen logs:
ddev mutagen logs
Restart Mutagen daemon:
ddev utility mutagen daemon stop
ddev utility mutagen daemon start
Recommended approach: Use upload_dirs in .ddev/config.yaml:
upload_dirs:
  - sites/default/files # CMS uploads
  - ../node_modules # Build dependencies
  - ../vendor/bin # Large binaries
Advanced approach: Edit .ddev/mutagen/mutagen.yml after removing the #ddev-generated line:
ignore:
  paths:
    - "/web/themes/custom/mytheme/node_modules"
    - "/vendor/large-package"
Then add corresponding bind-mounts in .ddev/docker-compose.bindmount.yaml:
services:
  web:
    volumes:
      - "../web/themes/custom/mytheme/node_modules:/var/www/html/web/themes/custom/mytheme/node_modules"
Always run ddev mutagen reset after changing mutagen.yml.
Add .git/hooks/post-checkout and make it executable:
#!/usr/bin/env bash
ddev mutagen sync || true

Then make it executable:

chmod +x .git/hooks/post-checkout
performance_mode

The standard practice is to use global configuration for performance_mode, so that each user does what's normal for them and the project configuration does not contain settings that might not work for another team member.
Most people don't have to change this anyway; macOS and traditional Windows default to performance_mode: mutagen and Linux/WSL default to performance_mode: none.
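Conversely, a Linux/WSL user who wants Mutagen behavior on every project can opt in globally:

ddev config global --performance-mode=mutagen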
Disable Mutagen if:
Disable per-project:
ddev mutagen reset && ddev config --performance-mode=none && ddev restart
Disable globally:
ddev config global --performance-mode=none
DDEV uses its own Mutagen installation, normally in ~/.ddev, but using $XDG_CONFIG_HOME when that is defined.
- Binary: $HOME/.ddev/bin/mutagen or ${XDG_CONFIG_HOME}/ddev/bin/mutagen
- Data directory: $HOME/.ddev_mutagen_data_directory
ddev utility mutagen sync list
ddev utility mutagen sync monitor <projectname>
Mutagen provides dramatic performance improvements for macOS and traditional Windows users, but understanding its asynchronous nature is key to avoiding issues:
- Use ddev utility mutagen-diagnose as your first debugging step
- Use upload_dirs to exclude large directories like node_modules or heavy user-generated file directories
- Run ddev mutagen reset after file changes made while DDEV is stopped
- Use ddev mutagen monitor when troubleshooting

The benefits far outweigh the caveats for most projects, especially with the new diagnostic tools that identify and help resolve common issues automatically.
For more information, see the DDEV Performance Documentation and the Mutagen documentation.
Copilot was used to create an initial draft for this blog, and for subsequent reviews.
I use Claude Code almost exclusively. Every day, for hours. It allowed me to get back into developing great tools, and I have published several results that are working very well. Plugins, skills, frameworks, development workflows. Real things that real people can use. The productivity is undeniable.
So let me be clear about what this post is. This is not a take on what AI can do. This is about AI doing it completely alone.
The results are there. But under supervision.
When we were building laollita.es, something happened that I documented in a previous post. We needed to apply some visual changes to the site. The AI agent offered a solution: a custom module with a preprocess function. It would work. Then we iterated, and it moved to a theme-level implementation with a preprocess function. That would also work. Both approaches would accomplish the goal.
Until I asked: isn't it easier to just apply CSS to the new classes?
Yes. It was. Simple CSS. No module, no preprocess, no custom code beyond what was needed.
Here is what matters. All three solutions would have accomplished the goal. The module approach, the theme preprocess, the CSS. They all would have worked. But two of them create technical debt and maintenance load that was completely unnecessary. The AI did not choose the simplest path because it does not understand the maintenance burden. It does not think about who comes after. It generates a solution that works and moves on.
This is what I see every time I let the AI make decisions without questioning them. It works... and it creates problems you only discover later.
I have been thinking about this for a while. I have my own theories, and they keep getting confirmed the more I work with these tools. Here is what I think is going on.
Eddie Chu made this point at the latest AI Tinkerers meeting, and it resonated with me because I live it every day.
I use frameworks. Skills. Plugins. Commands. CLAUDE.md files. I have written before about my approach to working with AI tools. I have built an entire organization of reference documents, development guides, content frameworks, tone guides, project structure plans. All of this exists to create guardrails, to force best practices, to give AI the context it needs to do good work.
And it will not keep the memory.
We need to force it. Repeat it. Say it again.
This is not just about development. It has the same problem when creating content. I built a creative brief step into my workflow because the AI was generating content that reflected its own patterns instead of my message. I use markdown files, state files, reference documents, the whole structure in my projects folder. And still, every session starts from zero. The AI reads what it reads, processes what it processes, and the rest... it is as if it never existed.
The Expo.dev engineering team described this perfectly after using Claude Code for a month [1]. They said the tool "starts fresh every session" like "a new hire who needs onboarding each time." Pre-packaged skills? It "often forgets to apply them without explicit reminders." Exactly my experience.
Here is something I have noticed repeatedly. In a chat interaction, in agentic work, the full history is the context. Everything that was said, every mistake, every correction, every back-and-forth. That is what the AI is working with.
When the AI is already confused and I have asked for the same correction three times and it is going in strange ways... starting a new session and asking it to analyze the code fresh, to understand what is there, it magically finds the solution.
Why? Because the previous mistakes are in the context. The AI does not read everything from top to bottom. It scans for what seems relevant, picks up fragments, skips over the rest. Which means even the guardrails I put in MD files, the frameworks, the instructions... they are not always read. They are not always in the window of what the AI is paying attention to at that moment.
And when errors are in the context, they compound. Research calls this "cascading failures" [2]. A small mistake becomes the foundation for every subsequent decision, and by the time you review the output, the error has propagated through multiple layers. An inventory agent hallucinated a nonexistent product, then called four downstream systems to price, stock, and ship the phantom item [3]. One hallucinated fact, one multi-system incident.
Starting fresh clears the poison. But an unsupervised agent never gets to start fresh. It just keeps building on what came before.
The Dunning-Kruger effect is a cognitive bias where people with limited ability in a task overestimate their competence. AI has its own version of this.
When we ask AI to research, write, or code something, it typically responds with "this is done, production ready" or some variation of "this is done, final, perfect!" But it is not. And going back to the previous point, that false confidence is now in the context. So no matter if you discuss it later and explain what was wrong or that something is missing... it is already "done." If the AI's targeted search through the conversation does not bring the correction back into focus... there you go.
Expo.dev documented the same pattern [1]. Claude "produces poorly architected solutions with surprising frequency, and the solutions are presented with confidence." It never says "I am getting confused, maybe we should start over." It just keeps going, confidently wrong.
The METR study puts hard numbers on this [4]. In a randomized controlled trial with experienced developers, AI tools made them 19% slower. Not faster. Slower. But the developers still believed AI sped them up by 20%. The perception-reality gap is not just an AI problem. It is a human problem too. Both sides of the equation are miscalibrated.
The information or memory that AI has is actually not all good. Usually it is "cowboy developers" who charge ahead and answer most social media questions, Stack Overflow threads, blog posts, and tutorials. And that is the training data. That is the information AI learned from.
The same principle applies beyond code. The information we produce as a society is biased, and AI absorbs all of it. That is why you see discriminatory AI systems across industries. AI resume screeners favor white-associated names 85% of the time [5]. UnitedHealthcare's AI denied care and was overturned on appeal 90% of the time [6]. A Dutch algorithm wrongly accused 35,000 parents of fraud, and the scandal toppled the entire government [7].
For my own work, I create guides to counteract this. Content framework guides that extract proper research on how to use storytelling, inverted pyramid, AIDA structures. Tone guides with specific instructions. I put them in skills and reference documents so I can point the AI to them when we are working. And still I have to remind it. Every time.
I have seen AI do what it did in laollita.es across multiple projects. In development, it created an interactive chat component, and the next time we used it on another screen, it almost wrote another one from scratch instead of reusing the one it had just built. Same project. Same session sometimes.
In content creation, I have a tone guide with specific stylistic preferences. And I still have to explicitly ask the AI to review it. No matter how directive the language in the instructions is. "Always load this file before writing content." It does not always load the file.
And it is not just my experience.
A Replit agent deleted a production database during a code freeze, then fabricated fake data and falsified logs to cover it up [8]. Google's Antigravity agent wiped a user's entire hard drive when asked to clear a cache [9]. Klarna's CEO said "we went too far" after cutting 700 jobs for AI and is now rehiring humans [10]. Salesforce cut 4,000 support staff and is now facing lost institutional knowledge [11]. The pattern keeps repeating. Companies trust the agent, remove the human, discover why the human was there in the first place.
I am not against AI. I am writing this post on a system largely built with AI assistance. The tools I publish, the workflows I create, the content I produce. AI is deeply embedded in my work. It makes me more productive.
At Palcera, I believe AI is genuinely great for employees and companies. When AI helps a developer finish faster, that time surplus benefits everyone. The developer gets breathing room. The company gets efficiency. And the customer can get better value, better pricing, faster delivery. That is real. I see it every day.
But all of that requires the human in the loop. Questioning the choices. Asking "isn't CSS simpler?" Clearing the context when things go sideways. Pointing to the tone guide when the AI forgets. Starting fresh when the conversation gets poisoned with old mistakes.
The results are there. But under supervision. And that distinction matters more than most people realize.
[1] Expo.dev, "What Our Web Team Learned Using Claude Code for a Month"
[2] Adversa AI, "Cascading Failures in Agentic AI: OWASP ASI08 Security Guide 2026"
[3] Galileo, "7 AI Agent Failure Modes and How To Fix Them"
[4] METR, "Measuring the Impact of Early-2025 AI on Experienced Developer Productivity"
[5] The Interview Guys / University of Washington, "85% of AI Resume Screeners Prefer White Names"
[6] AMA, "How AI Is Leading to More Prior Authorization Denials"
[7] WBUR, "What Happened When AI Went After Welfare Fraud"
[8] The Register, "Vibe Coding Service Replit Deleted Production Database"
[9] The Register, "Google's Vibe Coding Platform Deletes Entire Drive"
[10] Yahoo Finance, "After Firing 700 Humans For AI, Klarna Now Wants Them Back"
[11] Maarthandam, "Salesforce Regrets Firing 4,000 Experienced Staff and Replacing Them with AI"
Our webinar on Drupal CMS + Meridian theme is up on YouTube! In it, we talked about the new theme, demoed various example sites built with it, and ran through the new components.
We also talked about how it differs from Drupal CMS's built-in Byte theme and site template.
Enjoy!
1xINTERNET and React Online have joined forces: React Online will become the Dutch subsidiary of 1xINTERNET. Same great people, same trusted partnerships, now backed by a team of 90+ experts across Europe.
Integrating AI with Drupal content creation works well for text fields, but taxonomy mapping remains a significant challenge. AI extracts concepts in natural language, while Drupal taxonomies require exact predefined terms, and the two rarely match. This article explores why common approaches like string matching and keyword mapping fail, and presents context injection as a production-proven solution that leverages AI’s semantic understanding to select correct taxonomy terms directly from the prompt.
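As a rough illustration of the idea (vocabulary and article text hypothetical, not taken from the article): a context-injected prompt might read, "Pick exactly one term from this list, verbatim, and do not invent new terms: Renewable Energy, Public Transit, Urban Planning. Article summary: 'The city plans new bus corridors to cut commute times.'" Because the allowed terms travel inside the prompt, the model's answer can be matched back to an existing taxonomy term ID instead of being fuzzy-matched after the fact.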
For the past months, the AI Initiative Leadership Team has been working with our contributing partners to define what the Drupal AI initiative should focus on in 2026. That plan is now ready, and I want to share it with the community.
This roadmap builds directly on the strategy we outlined in Accelerating AI Innovation in Drupal. That post described the direction. This plan turns it into concrete priorities and execution for 2026.
The full plan is available as a PDF, but let me explain the thinking behind it.
Producing consistently high-quality content and pages is really hard. Excellent content requires a subject matter expert who actually knows the topic, a copywriter who can translate expertise into clear language, someone who understands your audience and brand, someone who knows how to structure pages with your component library, good media assets, and an SEO/AEO specialist so people actually discover what you made.
Most organizations are missing at least some of these skillsets, and even when all the people exist, coordinating them is where everything breaks down. We believe AI can fill these gaps, not by replacing these roles but by making their expertise available to every content creator on the team.
For large organizations, this means stronger brand consistency, better accessibility, and improved compliance across thousands of pages. For smaller ones, it means access to skills that were previously out of reach: professional copywriting, SEO, and brand-consistent design without needing a specialist for each.
Used carelessly, AI just makes these problems worse by producing fast, generic content that sounds like everything else on the internet. But used well, with real structure and governance behind it, AI can help organizations raise the bar on quality rather than just volume.
Drupal has always been built around the realities of serious content work: structured content, workflows, permissions, revisions, moderation, and more. These capabilities are what make quality possible at scale. They're also exactly the foundation AI needs to actually work well.
Rather than bolting on a chatbot or a generic text generator, we're embedding AI into the content and page creation process itself, guided by the structure, governance, and brand rules that already live in Drupal.
For website owners, the value is faster site building, faster content delivery, smarter user journeys, higher conversions, and consistent brand quality at scale. For digital agencies, it means delivering higher-quality websites in less time. And for IT teams, it means less risk and less overhead: automated compliance, auditable changes, and fewer ad hoc requests to fix what someone published.
We think the real opportunity goes further than just adding AI to what we already have. It's also about connecting how content gets created, how it performs, and how it gets governed into one loop, so that what you learn from your content actually shapes what you build next.
The things that have always made Drupal good at content are the same things that make AI trustworthy. That is not a coincidence, and it's why we believe Drupal is the right place to build this.
The 2026 plan identifies eight capabilities we'll focus on. Each is described in detail in the full plan, but here is a quick overview:
These eight capabilities are where the official AI Initiative is focusing its energy, but they're not the whole picture for AI in Drupal. There is a lot more we want to build that didn't make this initial list, and we expect to revisit the plan in six months to a year.
We also want to be clear: community contributions outside this scope are welcome and important. Work on migrations, chatbots, and other AI capabilities continues in the broader Drupal community. If you're building something that isn't in our 2026 plan, keep going.
Over the past year, we've brought together organizations willing to contribute people and funding to the AI initiative. Today, 28 organizations support the initiative, collectively pledging more than 23 full-time equivalent contributors. That is over 50 individual contributors working across time zones and disciplines.
Coordinating 50+ people across organizations takes real structure, so we've hired two dedicated teams from among our partners:
- 1xINTERNET, providing delivery management for the Product Development workstream
- QED42, providing delivery management for the Innovation workstream
Both teams are creating backlogs, managing issues, and giving all our contributors clear direction. You can read more about how contributions are coordinated.
This is a new model for Drupal. We're testing whether open source can move faster when you pool resources and coordinate professionally.
If you're a contributing partner, we're asking you to align your contributions with this plan. The prioritized backlogs are in place, so pick up something that fits and let's build.
If you're not a partner but want to contribute, jump in. The prioritized backlogs are open to everyone.
And if you want to join the initiative as an official partner, we'd absolutely welcome that.
This plan wasn't built in a room by itself. It's the result of collaboration across 28 sponsoring organizations who bring expertise in UX, core development, QA, marketing, and more. Thank you.
We're building something new for Drupal, in a new way, and I'm excited to see where it goes.
— Dries Buytaert
The Drupal AI Initiative officially launched in June 2025 with the release of the Drupal AI Strategy 1.0 and a shared commitment to advancing AI capabilities in an open, responsible way. What began as a coordinated effort among a small group of committed organizations has grown into a substantial, sponsor-funded collaboration across the Drupal ecosystem.
Today, 28 organizations support the initiative, collectively pledging more than 23 full-time equivalent contributors representing over 50 individual contributors working across time zones and disciplines. Together, sponsors have committed more than $1.5 million in combined cash and in-kind contributions to move Drupal AI forward.
The initiative now operates across multiple focused areas, including leadership, marketing, UX, QA, core development, innovation, and product development. Contributors are not only exploring what’s possible with AI in Drupal, but are building capabilities designed to be stable, well-governed, and ready for real-world adoption in Drupal CMS.
Eight months in, this is more than a collection of experiments. It is a coordinated, community-backed investment in shaping how AI can strengthen content creation, governance, and measurable outcomes across the Drupal platform.
As outlined in the 2026 roadmap, this year focuses on delivering eight key capabilities that will shape how AI works in Drupal CMS. Achieving that level of focus and quality requires more than enthusiasm and good ideas. It requires coordination at scale.
From the beginning, sponsors contributed both people and funding so the initiative could be properly organized and managed. With 28 organizations contributing more than 50 people across multiple workstreams, it was clear that sustained progress would depend on dedicated delivery management to align priorities, organize backlogs, support contributors, and maintain predictable execution.
To support this growth, the initiative ran a formal Request for Proposal (RFP) process to select delivery management partners to help coordinate work across both innovation and product development workstreams. This was not a shift in direction, but a continuation of our original commitment: to build AI capabilities for Drupal in a way that is structured, sustainable, and ready for real-world adoption.
To identify the right delivery partners, we launched the RFP process in October 2025 at DrupalCon Vienna. The RFP was open exclusively to sponsors of the Drupal AI Initiative. From the start, our goal was to run a process that reflected the responsibility we carry as a sponsor-funded, community-driven initiative.
The timeline included a pre-proposal briefing, an open clarification period, and structured review and interview phases. Proposals were independently evaluated against clearly defined criteria tailored to both innovation and production delivery. These criteria covered governance, roadmap and backlog management, delivery approach, quality assurance, financial oversight, and demonstrated experience contributing to Drupal and AI initiatives.
Following an independent review, leadership held structured comparison sessions to discuss scoring, explore trade-offs, clarify open questions, and ensure decisions were made thoughtfully and consistently. Final discussions were held with shortlisted vendors in December, and contracts were awarded in early January.
The selected partners are engaged for an initial six-month period. At the end of that term, the RFP process will be repeated.
This process was designed not only to select capable partners but to steward sponsor contributions responsibly and align with Drupal’s values of openness, collaboration, and accountability.
Following this structured process, two contributing partners were selected to support delivery across the initiative’s key workstreams.
QED42 will focus on the Innovation workstream, helping coordinate forward-looking capabilities aligned with the 2026 roadmap. QED42 has been an active contributor to Drupal AI efforts from the earliest stages and has played a role in advancing AI adoption across the Drupal ecosystem. Their contributions to initiatives such as Drupal Canvas AI, AI-powered agents, and other community-driven efforts demonstrate both technical depth and a strong commitment to open collaboration. In this role, QED42 will support structured experimentation, prioritization, and delivery alignment across innovation work.
1xINTERNET will lead the Product Development workstream, supporting the transition of innovation into stable, production-ready capabilities within Drupal CMS. As a founding sponsor and co-leader within the initiative, 1xINTERNET brings deep experience in distributed Drupal delivery and governance. Their longstanding involvement in Drupal AI and broader community leadership positions them well to guide roadmap execution, release planning, backlog coordination, and predictable productization.
We are grateful to QED42 and 1xINTERNET for their continued commitment to the initiative and for stepping into these roles in service of the broader Drupal community. We also want to thank all participating organizations for the time, thought, and care they invested in the process; the strong level of interest in this RFP and the high standard of submissions reflect the caliber of agencies and contributors engaged in advancing Drupal AI.
Both organizations were selected not only for their delivery expertise but for their demonstrated investment in Drupal AI and their alignment with the initiative’s goals. Their role is to support coordination, roadmap alignment, and disciplined execution across contributors, ensuring that sponsor investment and community effort translate into tangible, adoptable outcomes.
Contracts began in early January. Two development sprints have already been completed, and a third sprint is now underway, establishing a clear and predictable delivery cadence.
QED42 and 1xINTERNET will share more details about their processes and early progress in an upcoming blog post.
With the 2026 roadmap now defined and structured delivery teams in place, the Drupal AI Initiative is positioned to execute with greater clarity and focus. The eight capabilities outlined in the one-year plan provide direction. Dedicated delivery management provides the coordination needed to turn that direction into measurable progress.
Predictable sprint cycles, clearer backlog management, and improved cross-workstream alignment allow contributors to focus on building, refining, and shipping capabilities that can be adopted directly within Drupal CMS. Sponsor investment and community contribution are now supported by a delivery model designed for scale and sustainability.
This next phase is about disciplined execution. It means shipping stable, well-governed AI capabilities that site owners can enable with confidence. It means connecting innovation to production in a way that reflects Drupal’s strengths in structure, governance, and long-term maintainability.
We are grateful to the sponsors and contributors who have made this possible. As agencies and organizations continue to join the initiative, we remain committed to transparency, collaboration, and delivering meaningful value to the broader Drupal community.
We are entering a year of focused execution, and we are ready to deliver.
The Drupal AI Initiative is built on collaboration. Sponsors contribute funding and dedicated team members. Contributors bring expertise across UX, core development, QA, marketing, innovation, and production. Leadership provides coordination and direction. Together, this shared investment makes meaningful progress possible.
We extend our thanks to the 28 sponsoring organizations and the more than 50 contributors who are helping shape the future of AI in Drupal. Their commitment reflects a belief that open source can lead in building AI capabilities that are stable, governed, and built for real-world use.
As we move into 2026, we invite continued participation. Contributing partners are encouraged to align their work with the roadmap and engage in the active workstreams. Organizations interested in joining the initiative are welcome to connect and explore how they can contribute.
We have laid the foundation. The roadmap is clear. Structured delivery is in place. With continued collaboration, we are well-positioned to deliver meaningful AI capabilities for the Drupal community and the organizations it serves.
The Drupal Association engineering team is announcing the end of life (EOL) of the first generation of the Automatic Update API, which relies on the original signing solution for update validation, an approach that has since been replaced in later versions.
Drupal.org’s APIs for Automatic Updates 7.x-1.x and 8.x-1.x will be discontinued on May 4th, 2026. These branches have been unsupported since Drupal 7 and 8, the core versions they are compatible with, reached their own end of life.
Release contents hash files (example) will no longer be updated and will expire on May 12th, 2026. They may be removed after this date without notice.
In-place updates (example) will no longer be generated after May 4th, 2026. These are generated on demand, and existing update files will be removed.
APIs for supported versions of Automatic Updates will continue to be supported indefinitely.
Automatic Updates v1 was an important early step toward improving the safety and reliability of Drupal updates. However, its underlying signing and validation model has now been superseded by a more robust and secure approach based on TUF (The Update Framework) and Rugged.
If you are still using Automatic Updates under the 7.x-1.x or 8.x-1.x branches, now is the time to plan your move to a supported version, or to implement custom updates against the supported API in your own CI pipeline. Doing so ensures continued support, improved security, and alignment with Drupal’s long-term update strategy.
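For Composer-managed sites, moving to the supported contrib module is usually a small change. The commands below are a minimal sketch only, not an official migration path; the package name (drupal/automatic_updates) reflects the current contrib project for supported Drupal core versions, and your site may require additional steps.

  # A minimal sketch, assuming a Composer-managed Drupal 10/11 site.
  # Package and module names are the current contrib ones and may
  # differ depending on your core version.
  composer require drupal/automatic_updates
  drush pm:enable automatic_updates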
As DrupalCon Chicago 2026 draws closer, conversations about community are extending beyond sessions, socials, and contributions to include how we care for one another in shared spaces. The Drupal Community Working Group's Community Health Team has been working with event organizers to gather practical, community-informed health and safety guidance that reflects how people actually experience DrupalCon.
The information below provides resources for navigating the conference, the venue, and the city with confidence, while reinforcing Drupal's longstanding commitment to an inclusive, respectful, and supportive community where everyone can show up as their whole selves.
Have questions or concerns about DrupalCon Chicago? Feel free to drop by the Community Working Group's public office hours this Friday, February 13 at 10am ET / 1500 UTC.
Join the #community-health Drupal Slack channel for more information. A meeting link will be posted there a few minutes before office hours.
Updated: February 10, 2026
The information that was previously provided here has been moved to the DrupalCon Chicago Health & Safety page.
For the latest details, including any updates to these policies, please go to the official DrupalCon Chicago page:
DrupalCon Chicago Health & Safety
The Health and Safety page originated from discussions between the CWG Community Health Team and the DrupalCon Steering Committee after reviewing event websites from other communities in North America. We found the general health and safety information useful, and we are working on a template for the Planning Drupal Events Playbook that other Drupal events can use moving forward.
The information we gathered for the DrupalCon Chicago Health & Safety page was inspired by the Linux Foundation's Open Source Summit Minneapolis Health & Safety page, APHA Health & Safety page, American Geophysical Union Safety and Security Guidance page, and DjangoCon Travel info page.
Artificial intelligence is reshaping how we build websites and create content — and the Drupal AI ecosystem is making it easier than ever for site builders to harness that power responsibly.
If you've been curious about integrating AI into your Drupal workflow but aren't sure where to start, this is the workshop for you.
This full-day, hands-on workshop is designed for beginners who want to learn the fundamentals of using AI within Drupal. Over the course of the day, you'll work directly with key modules in the Drupal AI ecosystem — including AI Automators, Field Widget Actions, and AI Agents — gaining practical experience with setup, configuration, and real-world content generation techniques.
The emphasis throughout is on responsible AI usage: leveraging these tools to enhance (not replace) your effectiveness and efficiency as a developer or content author. You'll explore various setup options and companion modules for auditing and exploring AI capabilities, and you'll walk away with hands-on experience generating content in a thoughtful, responsible manner.
This workshop is aimed at Drupal site builders at the beginner level. No prior Drupal AI experience is necessary. If you can navigate the Drupal admin interface and have a basic understanding of AI prompt engineering, you're ready to dive in.
You'll need basic knowledge of AI prompt engineering, basic Drupal site-building skills, and a paid API account with an AI provider (OpenAI, Gemini, or Anthropic recommended). Alternatively, a free 30-day trial with the Amazee.ai AI provider is available.
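If you'd like a head start on the module stack mentioned above, the commands below show one way to pull it in. This is a minimal sketch, not the workshop's official setup; the package names (drupal/ai, drupal/ai_agents, drupal/ai_provider_openai) and submodule names are taken from the current contrib ecosystem and may change between releases.

  # A minimal sketch, assuming a Composer-managed Drupal site and the
  # OpenAI provider; swap in the provider module matching your API account.
  # Package and submodule names reflect the current contrib ecosystem.
  composer require drupal/ai drupal/ai_agents drupal/ai_provider_openai
  drush pm:enable ai ai_automators ai_agents ai_provider_openai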
Mike Anello (@ultimike) has been teaching Drupal professionally for over 15 years. As co-founder and lead instructor at DrupalEasy, he runs several well-known training programs including Drupal Career Online, Professional Module Development, and Professional Single Directory Components. Mike is a frequent presenter at Drupal events across the United States and Europe, and is deeply involved in the Drupal community as an organizer, code contributor, and documentation contributor. You'll be learning from one of the most experienced Drupal educators in the community.
This full-day workshop is being offered at two upcoming DrupalCamps on the US East Coast: Friday, February 20 at Florida DrupalCamp in Orlando, and Thursday, March 12 at DrupalCamp NJ in Princeton.
Registration for both events is now open, and space is limited. Don't wait to secure your spot.
Know a colleague, client, or friend who's been wanting to explore AI in Drupal? Please share this article with anyone who might benefit from a hands-on, beginner-friendly introduction to the Drupal AI module ecosystem. The more people in the Drupal community who understand how to use AI responsibly, the stronger our ecosystem becomes.