It’s hard to imagine pushing the limits of object oriented PHP so far that your web servers choke, but the truth is those limits are reached faster than you might think. We’ve run some tests over at Wufoo, and it turns out that mass object creation simply doesn’t hold up at scale. This limit forces developers to balance code consistency, which is desirable (especially for the old-schoolers), against performance. Replacing objects with arrays where possible makes things a little better, but the most performance-friendly approach involves appending to strings. For your convenience, we’ve run tests measuring page execution time and memory usage, and turned the results into a guideline to help you plan which areas of your code may have to break away from an object oriented style.

The Benchmarks

Basically, we set up a simple PHP page that iterates over a loop and creates 1) a single giant concatenated string, 2) an array of arrays each containing the word ‘test’, and 3) an array of objects each with one member variable set to ‘test’.
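For the curious, here is a minimal sketch of what such a benchmark page might look like. This is not the exact Wufoo test code; the class name and loop shape are illustrative, and each variant would normally run in its own request so the measurements don’t interfere.

```php
<?php
// Illustrative benchmark sketch (not the original Wufoo test code).
class TestObject
{
    public $value;
    public function __construct($value) { $this->value = $value; }
}

$start = microtime(true);

// 1) one giant concatenated string
$csv = '';
for ($i = 0; $i < 11000; $i++) {
    $csv .= 'test';
}

// 2) an array of arrays
$arrays = array();
for ($i = 0; $i < 11000; $i++) {
    $arrays[] = array('value' => 'test');
}

// 3) an array of objects
$objects = array();
for ($i = 0; $i < 11000; $i++) {
    $objects[] = new TestObject('test');
}

printf("%.1fms, %.2fmb\n",
    (microtime(true) - $start) * 1000,
    memory_get_peak_usage() / 1048576);
```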

                     Load Time    Memory Used
11,000 strings       15.7ms       1.45mb
11,000 arrays        26.6ms       3.99mb
11,000 objects       148.8ms      7.70mb
25,000 arrays        44.1ms       7.25mb
1,500,000 strings    2,532.6ms    7.14mb

As the table shows, creating objects takes a good amount of memory and time when compared to a string or array. And since 7.7mb is nearing the default memory limit for PHP, the page is close to a fatal out-of-memory error. Sure, the memory limit can be increased, but the point is that there is still a maximum. For most pages, 11,000 objects is overkill. But in some cases, like exporting a hefty database to CSV format or returning data from a public API, we may want to return 50,000+ records.


The good news is that the memory limitation does not put us in a place where the advantages of reusable code are completely removed. We just have to take a few precautions when writing code that involves intensive looping or object creation.

  • unset() - Avoid getting at objects through the accessor, and work directly with the recordset loop. After each iteration, unset the object that was just created. After a quick test, 45,000 objects could be created and unset using 1.43mb of memory at peak.

  • Static Method Calls - A static method can be called without the need to instantiate an object. For example, if our desired output is an XML string, but we need some of the object’s helper functions, we can still call those functions statically with Object::function(). Doing so only increased memory usage from 1.45mb to 1.46mb in the string example.

  • Paging - The downside to unset() and static methods is that we have to bypass our accessors to get at the objects and work directly with the loop. A third option is to be strict with paging, and never allow for more than X number of objects to be returned. This will keep the code 100% consistent, but may be a bit more intensive on the database and increase page execution time.
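The unset() and static-call precautions above can be sketched together. The recordset, the User class, and its formatName() helper below are all hypothetical stand-ins, but the shape of the loop is the point: build one object per iteration, lean on a static call instead of extra instances, and free the object before the next pass.

```php
<?php
// Hypothetical sketch of the unset() and static-call precautions.
class User
{
    public $name;
    public function __construct($row) { $this->name = $row['name']; }
    public static function formatName($name) { return ucwords($name); }
}

// Stand-in for a database recordset loop.
$rows = array(
    array('name' => 'alice smith'),
    array('name' => 'bob jones'),
);

$csv = '';
foreach ($rows as $row) {
    $user = new User($row);                       // work directly with the loop
    $csv .= User::formatName($user->name) . "\n"; // static call, no extra object
    unset($user);                                 // free the object each iteration
}
```

Peak memory stays flat because only one User object exists at a time, no matter how many rows the loop covers.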

Why Does this Matter?

Many development techniques, such as domain driven design, keep a set of value objects along with an accessor to each of those objects. The accessors act as an API that makes retrieval of the objects easy and consistent. For example, let’s say we have a User object and a UserAccessor: we would call UserAccessor->loadAllUsers(), which would return an array of User objects. Then we can loop through the array to display a list of all users, with access to each object’s member variables and functions.
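A bare-bones version of that pattern might look like the following. The class and method names follow the article’s example, but the bodies are invented; a real accessor would build the array from a SQL query rather than hard-coded rows.

```php
<?php
// Hedged sketch of the accessor pattern; bodies are invented for illustration.
class User
{
    public $name;
    public function __construct($name) { $this->name = $name; }
}

class UserAccessor
{
    public function loadAllUsers()
    {
        // In real code this would come from a SQL query.
        $rows = array('Alice', 'Bob');
        $users = array();
        foreach ($rows as $row) {
            $users[] = new User($row);
        }
        return $users;
    }
}

$accessor = new UserAccessor();
foreach ($accessor->loadAllUsers() as $user) {
    echo $user->name . "\n";
}
```

Note that loadAllUsers() materializes every object up front, which is exactly where the memory numbers in the table above start to bite.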

This is great because all developers work with the same objects, and SQL queries are constrained to accessors. Even if this isn’t your preferred development style, chances are your object oriented approach strives to achieve similar levels of consistency. That said, we can see from the benchmarks above that there may be situations where a portion of code we’ve created cannot return an array of objects because of memory limitations. Overall, object memory usage is far from a deal breaker, but it is something every developer worried about scaling should be aware of.

For more information, here are a couple of links that may provide some more insight.


Object Oriented PHP Memory Concerns by Ryan Campbell

This entry was posted 2 years ago and was filed under Notebooks.
Comments are currently closed.


  1. Laurens · 2 years ago

    Doesn’t the array in the array of objects test itself consume (at least some) load time and memory?

  2. Ryan Campbell · 2 years ago

    It does, but it is a small amount. I ran some tests, and it seems that if you were to make 11,000 variables and assign an object to each variable instead of keeping them in an array, the memory used in the object test would drop from 7.7mb to 7.36mb. That is small enough to still keep the data relevant, and by returning an array of objects the developer avoids working directly with the SQL query and the construction of objects. It’s a good point though; I wasn’t aware of the overhead on each array position when storing an object.

  3. Radoslav Stankov · 2 years ago

    When I need something like $users = UserAccessor->loadAllUsers(), I don’t return an array of objects or anything like that. I return an Iterator object, which saves memory because it is only one object. 90% of the time I need a foreach over the records, so an iterator is very handy here, because I pull only the object I currently need. In the other 10% I have a toArray() method to convert the Iterator to a normal array.

  4. Steve Clay · 2 years ago

    I assume by “a giant concatenated string” you mean an array of strings. It wouldn’t be fair to compare 11K objects vs. 11K arrays vs. 1 string. Among your solutions you could also add usage of the Flyweight pattern (when applicable).

    FWIW, I generally find that memory usage of a structure is roughly proportional to strlen(serialize($structure)). Realizing this makes you think twice about using long string keys in an array when you’re going to need 1000s of them. And instead of 1000 arrays with 5 keys each, it’s less memory to create 5 arrays with 1000 values each.

    At some point your optimization starts to affect maintainability and you’re better off throwing more memory at the problem (or reducing your paging limit).

  5. Hodicska Gergely · 2 years ago

    I don’t think this test makes much sense. Of course it is good practice to know precisely how things work behind the wall, but why is it useful to compare an object with a string or an array? Their purposes are quite different; they are not comparable. And usually one will use only a few objects, so this overhead will not be so big. For most projects the overhead is not what matters anyway; development time is.

  6. Duodraco · 2 years ago

    Use of Iterators can minimize memory consumption. If we are talking about database access and PDO, we don’t fill our server with all the data retrieved. Working this way, we can iterate over the User Collection one record at a time.

  7. Ryan Campbell · 2 years ago

    Steve, good tip on the strlen(serialize). That will make things a bit easier to run a quick check on in the future.

    Iterators are definitely an option also, similar to the paging option mentioned above. I guess the difference being that iterators still only require one query. The main downside is the amount of work up front, and the flexibility needed if you don’t want the developer to have to deal with the construction of the object. But you’re right, when memory is a concern and full code reusability is wanted, an iterator may be the best approach.

    The whole point of including the string in the test is because it helps achieve the desired output. For example, if you need a CSV file, you can loop over a recordset and add on to a giant string to get your desired output. Obviously not the best approach, but worth noting just how fast it is.

  8. Goran Miskovic · 2 years ago

    I haven’t done any benchmarks, but I would quote one of Marcus Börger’s articles:

    “The big difference: Arrays require memory for all elements and allow direct access to any element. Iterators know only one element at a time, require memory only for the current element, offer forward access only, and access is done by method calls.”

    It would be interesting to see benchmarks with ArrayObjects/ArrayIterators used.

    I recently needed to loop over a large array of arrays and create a new ArrayObject from the current array element in each loop. Instead of having something like $foo = new ArrayObject($largeArray->current()) in each loop, I created an empty ArrayObject $foo before entering the loop and then called $foo->exchangeArray($largeArray->current()) on each iteration.

    I haven’t done any serious benchmarking, but I quickly compared memory usage and execution time using KCachegrind, and the results were in favor of the exchangeArray() approach.

  9. a · 2 years ago

    What order am I supposed to read these comments in?

  10. Thomas Koch · 2 years ago
    1. Most of the time, maintainable code is much more important than fast code, because servers are cheaper than programmers.
    2. Without seeing your code, there’s nothing we can discuss.
    3. Please provide the PHP version number too.

    Please do not perpetuate the myth that one should not use OO because it is slow.

  11. Christian Vanek · 2 years ago

    Good article! We have run into similar issues, particularly because we rely completely on OO at some points and don’t use a DB at all until data comes back for reporting. Even then, we have some studies with over 22,000 data points (dynamic fields) in a single record (crazy I know — we do all sorts of mysql acrobatics to allow it). Before we optimized we had to bump the memory limit up to 300M just to allow larger studies to run. Servers are cheap — but not that cheap.

    Anyways, Thomas, I don’t think anyone is saying that you should avoid using OO. I think Ryan’s point is that as developers we need to be aware of resource usage, especially as you scale up.

  12. Robin · 2 years ago

    Great article, thanks. I’m just starting up on what may be a fairly intensive PHP OO application, and these concerns are at the forefront of my design and planning.