We are excited to announce our latest product: the OpenAI JavaScript API Client.
It is a downloadable JavaScript package that lets you connect your applications to the OpenAI API within minutes.
Leveraging GPT-3, Codex or DALL·E-based Artificial Intelligence (AI) in your application is now as simple as a few lines of code.
The package handles all the low-level details for you, so you can focus on delivering value in your application.
Some Examples
Text Completion Using GPT-3
const tectalicOpenai = require('@tectalic/openai').default;
tectalicOpenai(process.env.OPENAI_API_KEY)
    .completions.create({
        model: 'text-davinci-002',
        prompt: 'Will using a third party package save time?'
    })
    .then((response) => {
        console.log(response.data.choices[0].text.trim());
    });
// Using a third party package can save time because you don't have to write the code yourself.
Code Completion Using Codex
const tectalicOpenai = require('@tectalic/openai').default;
tectalicOpenai(process.env.OPENAI_API_KEY)
    .completions.create({
        model: 'code-davinci-002',
        prompt: '/* Create a JavaScript variable that saves the current date and time */',
        max_tokens: 256,
        stop: ';'
    })
    .then((response) => {
        console.log(response.data.choices[0].text.trim());
    });
// var currentDate = new Date();
Image Generation Using DALL·E
const tectalicOpenai = require('@tectalic/openai').default;
tectalicOpenai(process.env.OPENAI_API_KEY)
    .imagesGenerations.create({
        prompt: 'A cute baby sea otter wearing a hat',
        size: '256x256',
        n: 3
    })
    .then((response) => {
        response.data.data.forEach((image) => console.log(image.url));
    });
// https://oaidalleapiprodscus.blob.core.windows.net/private/...
// https://oaidalleapiprodscus.blob.core.windows.net/private/...
// https://oaidalleapiprodscus.blob.core.windows.net/private/...
A Demo
Here is a short video showing what it is like to build a request to the OpenAI completions endpoint from a TypeScript Node.js application:
What makes this package different?
TypeScript Declarations for All Requests and Responses
TypeScript declarations are included for all API requests and responses, putting complete details of the OpenAI API request and response structures at your fingertips.
This has several advantages, including:
- No need to read complex API documentation because your IDE displays all the details for each request and response, including all the available properties and corresponding types.
- If you are using this package in a TypeScript project, the tsc compiler will report typos or errors at compile time (see the sketch below).
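Here is a minimal sketch of how that looks in practice. It assumes the package exposes a default export for ES module imports (the CommonJS examples above access .default), and the comments about compiler behaviour are illustrative rather than exhaustive:

import tectalicOpenai from '@tectalic/openai';

tectalicOpenai(process.env.OPENAI_API_KEY)
    .completions.create({
        model: 'text-davinci-002',
        // If this property name were misspelled (e.g. 'promt'), tsc would flag it
        // at compile time instead of the request failing at runtime.
        prompt: 'Will using a third party package save time?'
    })
    .then((response) => {
        // The response is typed too, so response.data.choices autocompletes in your IDE.
        console.log(response.data.choices[0].text.trim());
    });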
IDE Autocompletion
All API endpoints, requests, and responses are fully documented within the package, so advanced IDEs (such as Visual Studio Code, Atom, or WebStorm) can offer autocompletion for every endpoint, method, parameter, and property.
All 20 Endpoints Supported
This package includes support for all of OpenAI's publicly available endpoints, including the commonly used completions and moderation endpoints, as well as the image generation and fine-tunes endpoints (should you wish to train your own custom AI model).
Please see the documentation for a full list of supported API methods.
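As a hedged sketch of one of those additional endpoints, here is what a moderation request could look like, assuming the moderations endpoint follows the same create pattern as the examples above (check the documentation for the exact method name and request fields):

const tectalicOpenai = require('@tectalic/openai').default;

// Assumption: the moderations endpoint is exposed with the same `create`
// pattern as completions and image generation, and the response mirrors the
// OpenAI moderation response, where each result includes a `flagged` boolean.
tectalicOpenai(process.env.OPENAI_API_KEY)
    .moderations.create({
        input: 'Some user-generated text to check'
    })
    .then((response) => {
        console.log(response.data.results[0].flagged);
    });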
Asynchronous Integrations with Promises
All API endpoints are accessible with one or two lines of code and return a Promise that resolves with the API response.
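Because every call returns a Promise, you can also use async/await. For example, the first completions request above can be rewritten like this (a sketch using the same call and model):

const tectalicOpenai = require('@tectalic/openai').default;

async function main() {
    // create() returns a Promise, so it can be awaited directly.
    const response = await tectalicOpenai(process.env.OPENAI_API_KEY)
        .completions.create({
            model: 'text-davinci-002',
            prompt: 'Will using a third party package save time?'
        });
    console.log(response.data.choices[0].text.trim());
}

main();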
Compatible with Node.js, TypeScript, or JavaScript Applications
The package has been built using TypeScript, so it should integrate seamlessly into your TypeScript applications.
It is also compatible with many other JavaScript environments and can be used from Node.js or plain old JavaScript.
Fully Tested
The package includes detailed unit and functional integration tests, giving us (and you) the confidence that things will work as expected.
Fully Supported
We want your integration experience to be seamless and trouble-free, which is why our Australia-based team of developers is here to help you.
Getting Started
You can find the @tectalic/openai package available on npm and GitHub.
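Installation from npm is a single command: npm install @tectalic/openai.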
We also have comprehensive documentation available.
Questions or Feedback?
We welcome your feedback, and we’re excited to see what you do with this package.
What do you like about it? What do you think could be improved?
You can reach out to us here.