Ibexa DXP Discussions

Community discussion forum for developers working with Ibexa DXP

Dynamic Content Caching [render_esi]


I’m building a site-wide alert notification feature for my site. Essentially, I created a new content type called ‘alert’ {id, title, message, status} and I’m storing those ‘alert’ objects in a folder called ‘Alerts’. This way we’ll have pre-written content for holidays, closures, etc. Eventually, I hope these objects can be “scheduled” to be published/unpublished in version 2.5. Currently, each alert object has an ezboolean field that flags whether a message should be displayed.


What I’m trying to get right is the render_esi call that caches just this message in the header of EACH page on the site, so the render_esi lives in page_layout.html.twig. These messages could stay online for days or minutes depending on the situation. The eZ Platform documentation I’ve found describes doing something like this for a menu, where any new content object matching the menu query purges the cache and rebuilds the menu. It’s done through $response->headers->set('X-Location-Id', 123); where 123 is vague to me. Should “123” be replaced with the location ID of my parent ‘Alerts’ folder, or can I just hard-code it to some arbitrary value and leave it? In that case I would call it ‘alerts’ instead of ‘123’.

Current implementation:

In my query type I’m limiting the result to 1 item, since it doesn’t make sense to display multiple messages at the same time — at least not until we need to, which I’m sure will happen one day. I guess I could use the location ID of the alert object returned by the search query, so any edits to that object would trigger a purge. However, I’m curious how you would implement this with multiple search results. You could collect each object’s location ID into an array and implode it into the X-Location-Id header, $response->headers->set('X-Location-Id', implode(',', $locationIds));, but that would not pick up any new objects until the ESI/page TTL times out. My guess would be to use a parent ID if all the objects come from a common parent location, OR to use a made-up ID like ‘foobarMenu’.
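For the multiple-results case, the implode approach could look roughly like this. This is a sketch, not tested code: the variable names are mine, and it assumes `$alertSearchResults` is the SearchResult returned by `findContent()` as in the controller code in this thread.

```php
<?php
// Sketch: tag the ESI response with every location ID in the search
// result, so a purge on any one of them invalidates this fragment.
$locationIds = [];
foreach ($alertSearchResults->searchHits as $hit) {
    // Each valueObject is a Content item; use its main location ID
    $locationIds[] = $hit->valueObject->contentInfo->mainLocationId;
}

$response = new Response();
// Produces a comma-separated list, e.g. "71,72,73"
$response->headers->set('X-Location-Id', implode(',', $locationIds));
```

As noted above, this only covers edits to the objects already in the result; a newly published alert won’t be picked up until the TTL expires, which is why tagging with the common parent location is attractive.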


{{ render_esi(controller('AppBundle:Alert:getActiveAlerts')) }}


namespace AppBundle\QueryType;

use eZ\Publish\Core\QueryType\QueryType;
use eZ\Publish\API\Repository\Values\Content\Query;
use eZ\Publish\API\Repository\Values\Content\Query\SortClause;

class AlertQueryType implements QueryType
{
    /**
     * Returns a query fetching only 1 active alert.
     *
     * @param array $parameters
     *
     * @return Query
     */
    public function getQuery(array $parameters = [])
    {
        $criteria = [
            new Query\Criterion\ParentLocationId(61),
            new Query\Criterion\ContentTypeIdentifier('alert'),
            new Query\Criterion\Field('active', Query\Criterion\Operator::EQ, true),
        ];

        $options = [
            'filter' => new Query\Criterion\LogicalAnd($criteria),
            'sortClauses' => [new SortClause\DatePublished(Query::SORT_DESC)],
            'limit' => 1,
        ];

        return new Query($options);
    }

    public static function getName()
    {
        return 'AppBundle:Alert';
    }

    /**
     * Returns an array of required parameters.
     *
     * @return array
     */
    public function getSupportedParameters()
    {
        return [];
    }
}



namespace AppBundle\Controller;

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Templating\EngineInterface;
use eZ\Publish\API\Repository\SearchService;
use AppBundle\QueryType\AlertQueryType;

class AlertController
{
    /** @var \Symfony\Component\Templating\EngineInterface */
    protected $templating;

    /** @var \eZ\Publish\API\Repository\SearchService */
    protected $searchService;

    /** @var \AppBundle\QueryType\AlertQueryType */
    protected $alertQueryType;

    /**
     * @param \Symfony\Component\Templating\EngineInterface $templating
     * @param \eZ\Publish\API\Repository\SearchService $searchService
     * @param \AppBundle\QueryType\AlertQueryType $alertQueryType
     */
    public function __construct(
        EngineInterface $templating,
        SearchService $searchService,
        AlertQueryType $alertQueryType
    ) {
        $this->templating = $templating;
        $this->searchService = $searchService;
        $this->alertQueryType = $alertQueryType;
    }

    /**
     * Renders the site-wide alert.
     *
     * @return Response
     */
    public function getActiveAlertsAction()
    {
        $query = $this->alertQueryType->getQuery();
        $alertSearchResults = $this->searchService->findContent($query);

        $alertItems = [];
        foreach ($alertSearchResults->searchHits as $hit) {
            $alertItems[] = $hit->valueObject;
        }

        $response = new Response();
        $response->headers->set('X-Location-Id', 'siteAlerts');

        // Template name is a placeholder; pass $response so the
        // X-Location-Id header ends up on the rendered response.
        return $this->templating->renderResponse(
            'alerts/list.html.twig',
            ['alertItems' => $alertItems],
            $response
        );
    }
}

Thinking about it, X-Location-Id must only accept real location IDs to work properly. I’m guessing that’s how related objects get purged when there’s a relationship, etc. Trying to find more info about this in the docs.


The 123 should be the location ID of the “Alerts” folder. The trick is that when a new alert object gets published, eZ should by default clear its own cache plus its parents’ related caches. If everything is configured correctly, this will purge the ESI block tagged with the correct X-Location-Id.

This will work if you are republishing the object when you set the boolean to true.

Okay, for simplicity’s sake I just hard-wired the “Alerts” folder location ID into X-Location-Id instead of trying to derive it from the alert object returned by the query type.

$response = new Response();
$response->headers->set('X-Location-Id', 71);

So I hit the page and nothing appears, since the alert object is disabled. In the admin UI I switch the alert to enabled, hit the page, and it’s empty, so I believe it’s caching an empty response. 100 seconds later, boom, it’s there. Modify the alert content object and publish, and after another 100-second wait it’s there.

I’m using eZ Launchpad for local development. On port 42081 (prod env) the http_cache is working, and so is 42082, which runs the dev environment. I also modified a folder object in the admin UI [42082] and the cache wasn’t purged, so now I’m thinking this is an eZ Launchpad-specific issue with the Varnish container.

It wasn’t clear to me from the documentation whether eZ Platform enables ESI and fragments by default.


framework:
    esi: ~
    fragments: ~

Symfony 3.4 documentation recommendations:

framework:
    esi: { enabled: true }
    fragments: { path: /_fragment }

eZ http_cache documentation:

ezpublish:
    system:
        site_group:  # your siteaccess or siteaccess group (name is an example)
            content:
                view_cache: true
                ttl_cache: true
                default_ttl: 60

Do view_cache and ttl_cache both have to be set to true for each siteaccess, or in the siteaccess group? Or are they enabled by default?

Hm, I am not sure if they are enabled by default, but view_cache should definitely be enabled. You should also configure trusted proxies so eZ knows where Varnish is. On the Varnish side you should support PURGE in the VCL.
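On the trusted proxies point: in Symfony 3.4 this is typically done in the front controller (web/app.php). A minimal sketch; the IP range is an assumption and should match your actual Varnish container’s address:

```php
<?php
use Symfony\Component\HttpFoundation\Request;

// Trust the reverse proxy (Varnish) so X-Forwarded-* headers are
// honored. The addresses below are examples; use your Varnish
// container's IP or subnet.
Request::setTrustedProxies(
    ['127.0.0.1', '172.16.0.0/12'],
    Request::HEADER_X_FORWARDED_ALL
);
```

Without this, the application may see Varnish’s IP as the client and mishandle cache-related headers.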

I believe my issues with http_cache have more to do with how I’m using ezlaunchpad for local development.

At times I use Firefox inside my virtual machine, browsing the site at http://localhost:42082, and I often also use Firefox on my host machine at http://10.x.x.x:42082, since it loads the site much faster given the host has more resources than the VM.

When I edit content via localhost:42082 and view the cache headers in Firefox from both the host machine IP and localhost, I can tell Varnish is treating the caches separately, since the x-cache-hits totals differ. This is because the request Host header is different.

I’m also noticing that if I edit a content item in the admin on port 42081 (non-Varnish), then later try to edit the same content object on port 42082, Varnish can’t clear the cache and it becomes “lost”. I understand that port 82 uses Varnish, so port 81 wouldn’t be involved, but shouldn’t a purge event reset that URL once editing comes back through port 82?

To get everything working again I’ve tried running bin/console fos:httpcache:invalidate:tag ez-all, but it doesn’t work. I then have to run ~/ez down && ~/ez create to “restart” Varnish. If I leave the admin UI open during this process, I get an error cached in Varnish. What I believe happens is that the admin UI keeps polling for notifications while the new containers are being created, hits a composer error, and that error response gets cached in Varnish. So I have to re-run ~/ez down && ~/ez create to get the error un-cached.

Fatal error: Uncaught Symfony\Component\Debug\Exception\FatalThrowableError: Call to a member function jsonSerialize() on null in /var/www/html/project/ezplatform/vendor/ocramius/proxy-manager/src/ProxyManager/GeneratorStrategy/EvaluatingGeneratorStrategy.php(54) : eval()'d code:57 Stack trace: #0 [internal function]: ProxyManagerGeneratedProxy_PM_\EzSystems\EzPlatformAdminUi\UI\Config\ConfigWrapper\Generated1e5e057e7039f764ead98a70fbef2e70->jsonSerialize() #1 /tmp/ezplatformcache/var/cache/prod/twig/0c/0c97d682a56e6d47b87222774405eeea2f75ef16f6464f54a70fdb6e1295e2dc.php(93): json_encode(Object(ProxyManagerGeneratedProxy_PM_\EzSystems\EzPlatformAdminUi\UI\Config\ConfigWrapper\Generated1e5e057e7039f764ead98a70fbef2e70)) #2 /var/www/html/project/ezplatform/vendor/twig/twig/lib/Twig/Template.php(390): __TwigTemplate_081f77a51d4c13124baa0d5e80ef674a756cf706a794c6b3cf5557a14fc472fc->doDisplay(Array, Array) #3 /var/www/html/project/ezplatform/vendor/twig/twig/lib/Twig/Template.php(367): Twig_Template->displayWithErrorHand in /var/www/html/project/ezplatform/vendor/ocramius/proxy-manager/src/ProxyManager/GeneratorStrategy/EvaluatingGeneratorStrategy.php(54) : eval()'d code on line 57

I guess what I need to do is pick where to run Firefox (host or VM) and whether I need to be on port 82 (Varnish) or 81 (Symfony cache), then stay consistent and not switch between the two.

When you publish the object you should see the PURGE logged in varnishlog. If not, something is not configured correctly.
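To check that, something along these lines should show purge traffic on the Varnish side. The container name is a placeholder; the varnishlog flags and VSL query syntax are standard for Varnish 4+:

```shell
# Follow only PURGE requests hitting Varnish, grouped per request
docker exec -it <varnish_container> varnishlog -g request -q 'ReqMethod eq "PURGE"'
```

Publish a content item while this is running; if nothing appears, the purge requests are not reaching Varnish (wrong purge_server/purge_type configuration or network path).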

I believe this is now an eZ Launchpad issue regarding the purge_type variable being set in the vhost or in the env.