Ibexa DXP Discussions

Community discussion forum for developers working with Ibexa DXP

Page builder - extremely slow query on some DB instances (eZP 2.5)

Hello all,
Long time, no see :slight_smile:
Hope everyone is well and healthy in these strange times!

We’re experiencing some problems with the following query on eZ Platform 2.5:

SELECT a.id, a.name, a.value, mab.block_id, mbz.zone_id
FROM `ezpage_attributes` a
LEFT JOIN `ezpage_map_attributes_blocks` mab ON mab.attribute_id = a.id
LEFT JOIN `ezpage_map_blocks_zones` mbz ON mab.block_id = mbz.block_id
LEFT JOIN `ezpage_map_zones_pages` mzp ON mbz.zone_id = mzp.zone_id
WHERE mzp.page_id = 516;

On some instances it consistently takes around a minute to execute, even though each table contains only 2k–25k rows. On other engines or instances, the same query executes very fast.

+------+-----------+-------+----------+---------+
| id   | name      | value | block_id | zone_id |
+------+-----------+-------+----------+---------+
| 1628 | contentId |   582 |     1594 |     516 |
| 1629 | contentId |   577 |     1595 |     516 |
| 1630 | contentId |  2892 |     1596 |     516 |
| 1631 | contentId |   585 |     1597 |     516 |
+------+-----------+-------+----------+---------+
4 rows in set (49.30 sec)

We initially suspected a problem with AWS RDS (MariaDB 10.2), but now we’ve actually been able to replicate this on MariaDB 10.3 outside of RDS.

We noticed that on the instances with slow execution, EXPLAIN shows the `a` table being processed first. On the instances with properly fast execution, the `a` table is analysed last.
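If the optimizer's join order really is the difference, one way to test that theory is MariaDB's STRAIGHT_JOIN modifier, which forces the tables to be joined in the order they are listed. A diagnostic sketch, not a fix; it rewrites the LEFT JOINs as inner joins, which is equivalent here only because the `WHERE mzp.page_id = 516` filter already discards unmatched rows, and the page_id value is just the one from the example above:

```sql
-- Diagnostic sketch only. STRAIGHT_JOIN makes MariaDB/MySQL join the
-- tables in the listed order, so you can compare timings against the
-- optimizer's own choice of plan.
SELECT STRAIGHT_JOIN a.id, a.name, a.value, mab.block_id, mbz.zone_id
FROM `ezpage_map_zones_pages` mzp
JOIN `ezpage_map_blocks_zones` mbz ON mbz.zone_id = mzp.zone_id
JOIN `ezpage_map_attributes_blocks` mab ON mab.block_id = mbz.block_id
JOIN `ezpage_attributes` a ON a.id = mab.attribute_id
WHERE mzp.page_id = 516;

-- Stale statistics can also make the optimizer pick a bad join order;
-- refreshing them sometimes changes the chosen plan:
ANALYZE TABLE ezpage_attributes, ezpage_map_attributes_blocks,
              ezpage_map_blocks_zones, ezpage_map_zones_pages;
```

If the forced order is fast while the optimizer's order is slow, that points at missing indexes or bad statistics rather than at the query itself.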

When traffic grows, this query kills our DB…

Has anyone seen anything like that?
Is that query, or are its tables, known to be unoptimised or to have known improvements?


Hi Piotr!

I don’t know which version you’re on now, but at first glance it looks like you might have missed adding some indexes during your recent upgrades.
Please take a look at the following documentation page: https://doc.ezplatform.com/en/latest/updating/4_update_2.5/#page-builder
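To illustrate the kind of check the upgrade notes call for: each join and filter column in the query above should be covered by an index. The statements below are only a sketch with hypothetical index names; use the exact statements from the linked upgrade documentation, since the real names and column sets may differ.

```sql
-- Hypothetical index names, for illustration only; take the real
-- statements from the 2.5 upgrade documentation linked above.
CREATE INDEX idx_mab_attribute_id ON ezpage_map_attributes_blocks (attribute_id);
CREATE INDEX idx_mab_block_id     ON ezpage_map_attributes_blocks (block_id);
CREATE INDEX idx_mbz_zone_id      ON ezpage_map_blocks_zones (zone_id);
CREATE INDEX idx_mzp_page_id      ON ezpage_map_zones_pages (page_id);

-- Verify what already exists before creating anything:
SHOW INDEX FROM ezpage_map_zones_pages;
```

Without an index on `ezpage_map_zones_pages.page_id`, the optimizer cannot start from the filtered table and may fall back to scanning `ezpage_attributes` first, which matches the slow plan described above.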


Hello Kamil,

Thank you so much for the hint; it looks like you may have nailed it. I had of course checked that the indexes were there, but your link made me examine them one by one carefully, and indeed some were missing. We still have to verify that this fixes the very slow executions, but I think we’ve just identified the source of the bottleneck on the server.