
Add CreateChatCompletionStreamResponse as an Event-stream Response Type to the /Chat/completions Endpoint #311

Open
paulhdk wants to merge 1 commit into master

Conversation

@paulhdk commented Aug 14, 2024

In the /chat/completions endpoint definition under x-oaiMeta it says:

x-oaiMeta:
    name: Create chat completion
    group: chat
    returns: |
        Returns a [chat completion](/docs/api-reference/chat/object) object, or a streamed sequence of [chat completion chunk](/docs/api-reference/chat/streaming) objects if the request is streamed.

The "streamed sequence of chat completion chunk objects" refers, I believe, to the CreateChatCompletionStreamResponse type, which is defined, but never explicitly linked to the createChatCompletionRequest() function.

Since it is never defined that createChatCompletionRequest() may return a stream of chunk objects, tools such as the swift-openapi-generator package, which I have been using to implement the OpenAI OpenAPI spec, cannot generate the correct code for the spec.
For more context, see the corresponding issue in the swift-openapi-generator repo.

This PR links the CreateChatCompletionStreamResponse type to the createChatCompletion() function.
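
For context, here is a minimal sketch of what such a link could look like on the createChatCompletion operation, assuming standard OpenAPI 3.x conventions and the schema names already defined in the spec; the exact change in this PR may differ:

    paths:
      /chat/completions:
        post:
          operationId: createChatCompletion
          responses:
            "200":
              description: OK
              content:
                # Non-streaming requests return a single chat completion object.
                application/json:
                  schema:
                    $ref: "#/components/schemas/CreateChatCompletionResponse"
                # Streaming requests (stream: true) return server-sent events,
                # each carrying a chat completion chunk.
                text/event-stream:
                  schema:
                    $ref: "#/components/schemas/CreateChatCompletionStreamResponse"

Declaring text/event-stream alongside application/json gives generators such as swift-openapi-generator a typed response case to emit for streamed replies, rather than relying on the prose in the x-oaiMeta description.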

@PSchmiedmayer

Thank you @paulhdk for creating the PR; it would be amazing to see this merged to allow generators to pick up on this information and correctly parse streaming requests.

paulhdk added a commit to paulhdk/SpeziLLM that referenced this pull request Aug 27, 2024
endpoint

This enables swift-openapi-generator to generate streamed responses.

See: openai/openai-openapi#311
paulhdk added a commit to paulhdk/SpeziLLM that referenced this pull request Sep 13, 2024
endpoint

This enables swift-openapi-generator to generate streamed responses.

See: openai/openai-openapi#311