• 342345@feddit.de · 1 year ago

    Uncached server-side rendered response times in double-digit milliseconds.

    First thought: that sounds slow. But for the use case of delivering HTML over the Internet, it is fast enough.

      • aksdb@feddit.de · 1 year ago

        For a bit of templating? Yes! What typically drives response times up is the database or some RPC, both of which are outside of PHP's control, so I assume those were not factored in (because PHP can't win anything there in a comparison).
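
        Roughly the kind of work meant by "a bit of templating": plain PHP turning data that is already in memory into HTML, with no database or RPC in the path. A minimal sketch; the data, markup, and timing code are made up for illustration.

        ```php
        <?php
        // Hypothetical data that would normally come from a DB or RPC;
        // hard-coded here so only the templating work is timed.
        $posts = [
            ['title' => 'Hello world', 'author' => 'alice'],
            ['title' => 'Second post', 'author' => 'bob'],
        ];

        // Escape everything that goes into the HTML.
        $e = fn (string $s): string => htmlspecialchars($s, ENT_QUOTES, 'UTF-8');

        $start = hrtime(true);
        ob_start();
        ?>
        <ul>
        <?php foreach ($posts as $post): ?>
          <li><?= $e($post['title']) ?> by <?= $e($post['author']) ?></li>
        <?php endforeach; ?>
        </ul>
        <?php
        $html = ob_get_clean();
        printf("rendered %d bytes in %.3f ms\n", strlen($html), (hrtime(true) - $start) / 1e6);
        ```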

        • naught@sh.itjust.works · 1 year ago

          Anything under about 100 ms feels instant to the user, especially a page load. It's a balancing act of developer experience vs. performance, and splitting hairs over milliseconds seems inconsequential to me. I mean, PHP requires $ before variables! That's the real controversy :p

          • aksdb@feddit.de · 1 year ago

            > Anything under about 100 ms feels instant to the user, especially a page load.

            True, but it accumulates. Every ms I save on templating I can “waste” on I/O, DB, upstream service calls, etc.

          • aksdb@feddit.de · 1 year ago

            If you run it in old-school CGI mode, no, because each request spawns a new process. But that's nowhere near state of the art. Typically you'd still have a long-running process somewhere that could manage a connection pool. No idea if PHP actually does, though. I can't imagine that it wouldn't, however, since PHP would be slaughtered in benchmarks if there were no way to keep connections (or pools) open across requests.
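
            For illustration, a sketch of one way connections do survive across requests in PHP when it runs under a long-running worker model (e.g. PHP-FPM): PDO's persistent-connection flag makes the worker reuse an already-open connection instead of establishing a new one per request. The DSN, table, and credentials below are placeholders.

            ```php
            <?php
            // Sketch: reusing a database connection across requests in PHP.
            // Under a long-running worker (e.g. PHP-FPM), a persistent PDO
            // connection stays open in the worker process and is handed back
            // on later requests instead of being re-established each time.
            // In old-school CGI mode this would not help, since the process
            // (and its connection) dies after every request.
            $pdo = new PDO(
                'mysql:host=127.0.0.1;dbname=app;charset=utf8mb4', // placeholder DSN
                'app_user',                                        // placeholder user
                'secret',                                          // placeholder password
                [
                    PDO::ATTR_PERSISTENT => true,               // reuse across requests
                    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
                ]
            );

            $stmt = $pdo->prepare('SELECT id, title FROM posts WHERE id = ?');
            $stmt->execute([1]);
            var_dump($stmt->fetch(PDO::FETCH_ASSOC));
            ```

            Note this keeps one connection open per worker process rather than managing a shared pool inside PHP itself, which is roughly the uncertainty the comment is pointing at.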