Hi Decile team,
Searching and updating records in our Fundraising YC II pipeline has become very slow recently — to the point where it's interfering with day-to-day work. I dug into it with browser DevTools and want to share what I found so your engineers can act on it.

What's happening
  • Searching by name in a pipeline column takes ~9 seconds to settle after a single keystroke. The page feels frozen during that time.
  • Changing a prospect's stage can take anywhere from ~2 seconds to ~12 seconds for the click to land, followed by a fan-out of background requests. We measured a single stage move that took 11.7 seconds server-side.

Measurement #1 — Search ("brian bell")
A single keystroke in the Name search filter triggers ~40 HTTP requests:
  • 1 × PUT /pipelines/{id}/update_user_display (1.4 s, fired on focus)
  • 1–2 × GET /pipelines/{id}/headline_parts (the second is a duplicate that fires ~6 s later)
  • 36 × GET /pipelines/{id}/group_parts?name=<stage> — one request per pipeline column
  • 1 × GET /pipelines/{id}/group_counts
Metric | Value
Total requests fired by one search | 40
Wall-clock from keyup to last response | ~9.0 seconds
Per-stage group_parts server time | min 246 ms / median 436 ms / max 648 ms
Payload size of an empty column response | 576 bytes (still ~400 ms server time)
A second keystroke fires the entire 36-request wave again with no cancellation of the first wave, so they pile up.

Measurement #2 — Stage change (update_prospect_by_cell)
We did a clean round-trip on a single prospect: moved them out of their current stage, then back. Both calls hit PUT /pipelines/{id}/update_prospect_by_cell (with prospect_id, cell_id for the stage column, and value = stage_id).
Move | update_prospect_by_cell server time | Side-effect requests | Wall-clock until UI settled
Move 1 (out of current stage) | 11,678 ms | 36 × group_parts + 2 × headline_parts (fan-out) | ~12 seconds
Move 2 (back into original stage) | 2,366 ms | 2 × headline_parts only | ~9.1 seconds
Two things to flag:
  1. Move #1 took nearly 12 seconds for the API call alone. Even allowing for some warm-cache effect on the second call, ~2.4 s is still slow for what should be a single-row UPDATE. We've seen worse — sometimes >12 s — making it feel like the click didn't register and tempting a second click (which would presumably stack another expensive call).
  2. Stage changes also fan out the 36-column refresh. Move #1 fired 36 group_parts requests in parallel in addition to the update itself, exactly the same fan-out as a search. So every stage change pays the full pipeline-refresh tax.

Why I think this is the root cause
Our pipeline has 36 stage columns. Both search and stage-update latency appear to scale linearly with the number of columns — pipelines with fewer stages would feel snappy; ours feels broken. Most VC pipelines we'd want to use this on are going to have a lot of stages, so this likely affects more than just us.
The architectural pattern is one Turbo Stream fetch per column, fanned out client-side. Combined with Chrome's 6-connection-per-host cap, those 36 requests serialize and the user waits.
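For intuition, here's a rough back-of-envelope in TypeScript using the median server time measured above and assuming the HTTP/1.1 six-connections-per-host limit applies. It ignores network latency and per-wave tail latency (each wave is gated on its slowest response), so real wall-clock time is worse:

```typescript
// Back-of-envelope: 36 per-column requests behind a 6-connection-per-host
// cap serialize into sequential waves.
const requests = 36;   // one group_parts call per stage column
const poolSize = 6;    // Chrome's HTTP/1.1 connections-per-host limit
const medianMs = 436;  // median group_parts server time measured above

const waves = Math.ceil(requests / poolSize); // 6 sequential waves
const estimateMs = waves * medianMs;          // 2616 ms of queueing alone
```

That's ~2.6 s of serialized server time before counting the slower tail responses (up to 648 ms per wave), the 1.4 s update_user_display call, and network overhead — which lines up with the ~9 s we observe.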

Suggested fixes (rough order of impact)
  1. Replace the per-column fan-out with a single endpoint (e.g. GET /pipelines/{id}/refresh?q=...) that returns a batched Turbo Stream update — one round trip instead of 36.
  2. Investigate why update_prospect_by_cell itself takes seconds. A single-row write shouldn't cost 2–12 s. Likely candidates: synchronous side-effects on stage change (recompute headline counts, run pipeline actions, fire webhooks/Datadog spans, etc.) — move them to a background job.
  3. Don't fan out the 36-column refresh on stage change. After a stage change, only the source and destination columns need to be re-rendered.
  4. Short-circuit empty columns server-side so a stage with zero matches doesn't cost a full round trip.
  5. Cancel in-flight searches on a new keystroke (AbortController).
  6. Deduplicate the headline_parts listener — two identical calls fire per action, one of them ~6 s late.
  7. Audit the focus-time update_user_display PUT — 1.4 s on focus is itself a UX issue.
  8. Index prospects.name with trigram search (pg_trgm if you're on Postgres) so per-column queries drop from ~400 ms to <50 ms.
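For fix #5, a minimal client-side sketch of the cancellation pattern (illustrative TypeScript only — `searchPipeline` and `runSearch` are placeholder names, not your actual code):

```typescript
// Sketch: abort the previous keystroke's request wave before starting a new one.
// `runSearch` stands in for whatever fires the group_parts/group_counts requests.
let inflight: AbortController | null = null;

function searchPipeline(
  query: string,
  runSearch: (q: string, signal: AbortSignal) => Promise<unknown>,
): Promise<unknown> {
  inflight?.abort(); // cancel the still-running wave, if any
  const controller = new AbortController();
  inflight = controller;
  // Any fetch() given this signal rejects with an AbortError once abort()
  // is called, so stale responses never reach the DOM.
  return runSearch(query, controller.signal);
}
```

Passing the same signal to every fetch in the wave means a single abort() cancels all 36 in-flight requests at once when the next keystroke arrives.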

What you can pull on your side
I can see Datadog RUM is loaded on the page — the slow searches and updates should be visible in your RUM data under the /pipelines/7Na3Qz8D/group_parts, /headline_parts, and /update_prospect_by_cell endpoints, filtered to our org. Happy to provide a HAR file, screenshare, or any other detail that's useful.
Thanks — would love to get this on the roadmap. The product is great when it's responsive, and this is the main thing slowing us down right now.
Brian