
Connection pool optimization: reduce connection maintenance loops complexity #929

Open · MarkusSintonen wants to merge 1 commit into master from optimize-connection-pool-loops
Conversation

@MarkusSintonen (Contributor) commented Jun 13, 2024

Summary

This is a small, low-hanging fruit for optimizing and simplifying the connection maintenance loops in the connection pool. The work is based on a suggestion by @T-256 here (thanks!). Previously, the performance of httpcore degraded as the connection count in the pool increased, because the maintenance loops had to do more and more work iterating and re-iterating over the connections.

This optimization brings the performance of httpcore to the same level as urllib3 for sync usage.

The benchmarks below include the socket polling fix. (The socket polling problem makes request-processing latency vary so much that it overshadows everything else.)

Previously with sync (with optimized socket polling):
[benchmark chart: new_expiry_sync]

PR with sync:
[benchmark chart: sync_new]

As can be seen, performance reaches exactly the same level as urllib3. There is almost no overhead from httpcore-related request processing anymore.
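
To make the complexity point concrete, here is a rough sketch (illustrative only, not this PR's actual diff) of the difference between re-scanning the whole pool for every queued request and reusing a single filtered pass:

```python
# Illustrative only; `requests` and `connections` stand in for the pool's
# internal lists, and is_queued()/is_available() are assumed predicate methods.

def assign_naive(requests, connections):
    # Every queued request re-scans all connections, so work grows with
    # (queued requests x pooled connections).
    for request in [r for r in requests if r.is_queued()]:
        available = [c for c in connections if c.is_available()]
        ...  # pick a connection for `request` from `available`

def assign_single_pass(requests, connections):
    # The interesting subset is computed once per maintenance pass and
    # updated incrementally as requests get assigned.
    available = [c for c in connections if c.is_available()]
    for request in [r for r in requests if r.is_queued()]:
        ...  # pick from `available`, removing entries as they get used
```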

Checklist

  • I understand that this PR may be closed in case there was no previous discussion. (This doesn't apply to typos!)
  • I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
  • I've updated the documentation accordingly.

@MarkusSintonen changed the title from "Connection pool optimization: reduce connection maintenance loop complexity" to "Connection pool optimization: reduce connection maintenance loops complexity" on Jun 13, 2024
@MarkusSintonen force-pushed the optimize-connection-pool-loops branch 2 times, most recently from da9636b to b6fd02c on June 15, 2024 at 14:56
@MarkLux commented Aug 19, 2024

Wondering if there has been any further progress on this PR? Our team is waiting for this fix. @MarkusSintonen @T-256

@MarkusSintonen (Contributor, Author) commented:
@MarkLux I don't know where the author has gone :/

@Mrreadiness left a comment

Hi @MarkusSintonen! Thank you for your PR, I hope to see it merged soon.
Recently I've faced a performance issue in the pool when the number of requests awaiting execution grows. It looks like your PR will partially solve such issues; I've marked a couple of places with potential improvements.
Also, have you thought about changing the list used for _requests to something better suited to queue-like workloads, for example a deque? It could reduce CPU usage for remove operations, especially with a huge queue.
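
A minimal sketch of the deque idea, assuming the request queue mostly needs appends at the tail and removals from the head (illustrative names only, not httpcore's actual code):

```python
from collections import deque

class RequestQueue:
    # deque gives O(1) append/popleft, whereas list.pop(0) is O(n) because it
    # shifts every remaining element. Note that deque.remove() of an arbitrary
    # element is still O(n), so the win is mainly for FIFO-style handling.
    def __init__(self) -> None:
        self._requests: deque = deque()

    def add(self, pool_request) -> None:
        self._requests.append(pool_request)

    def pop_next(self):
        # Pop the oldest queued request without scanning the whole container.
        return self._requests.popleft() if self._requests else None
```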

# log: "closing idle connection"
self._connections.remove(connection)
closing_connections.append(connection)

# Assign queued requests to connections.
queued_requests = [request for request in self._requests if request.is_queued()]
for pool_request in queued_requests:
for pool_request in list(self._requests):


Do we need a copy of the list here?

idling_connection = next(
    (c for c in self._connections if c.is_idle()), None
)
if idling_connection is not None:


Should we continue iterating over self._requests if there is no available or idling connection? Can we break in such cases?
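
A rough sketch of the break idea (illustrative only; whether it is actually safe depends on the per-origin availability checks in the real loop, and `_max_connections` is assumed to be the pool's connection limit):

```python
for pool_request in queued_requests:
    idling_connection = next(
        (c for c in self._connections if c.is_idle()), None
    )
    can_create = len(self._connections) < self._max_connections  # assumed limit attribute
    if idling_connection is None and not can_create:
        # Nothing can be freed or newly created; depending on how per-origin
        # "available" connections are handled, the remaining queued requests
        # may not be assignable in this pass either, so we could stop early.
        break
    ...  # otherwise proceed to assign pool_request as before
```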
