Contact us

Please get in touch and one of the team will get back to you soon.

Thank you! We’ll get back to you soon.

We have received your message and will get back to you as soon as possible.

Oops! Something went wrong.

More details

Whatever your query, get in touch. Our AI engines remove friction, reduce toil, and create instant knowledge so you can do a better job faster.

Frequently asked questions

How are your AI engines set up (Azure infrastructure)?

We'll send you a draft email to send to Microsoft, allowing us to create an Azure Subscription that you control securely within your Azure tenant. Once that's done, we'll request access rights and build your AI engine. We'll load your information securely in your tenant, train the engine, and then you're ready to go.

Does my data remain within my existing Azure tenant?

Yes. We set up a separate Azure Subscription within your existing Azure tenant, and your AI engine is built in that Subscription, so your data remains within your tenant. Engine only has access (granted by you) to this specific Azure Subscription – not to the full Azure tenant.

In what location does data processing (indexing / prompts / responses) occur?

All data processing is done in the Azure UK South region.

How do you connect with ChatGPT?

We use a private API to call the ChatGPT large language model. This private API is not accessible to external parties or the general public; only authorised individuals within your organisation can interact with it.
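For illustration only, here is a minimal sketch of what a call through such a private endpoint could look like from inside your tenant, assuming the openai and azure-identity Python packages, Azure AD (Entra ID) authentication, and hypothetical endpoint and deployment names (none of these specifics are stated in the answer above):

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Authenticate as an authorised identity within your organisation's tenant
# (hypothetical endpoint and deployment names, for illustration only).
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-private-openai-resource>.openai.azure.com",
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4",  # the name of your private deployment, not a public endpoint
    messages=[{"role": "user", "content": "Summarise our annual leave policy."}],
)
print(response.choices[0].message.content)
```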

What steps have you taken to ensure that data processing is secure?

Indexing: Your data is ingested and indexed, creating searchable indexes from your content. This indexing process is handled entirely within your Azure tenant.
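As a rough illustration of this step, a sketch of uploading documents to a search index that lives in your own tenant, assuming the azure-search-documents Python package and hypothetical service, index, and field names:

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Upload documents to an existing index in your Azure tenant
# (hypothetical service, index, and field names).
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="company-documents",
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_ADMIN_KEY"]),
)

documents = [
    {"id": "1", "title": "Leave policy", "content": "Employees are entitled to 25 days of annual leave..."},
    {"id": "2", "title": "Expenses guide", "content": "Claims must be submitted within 30 days..."},
]
results = search_client.upload_documents(documents=documents)
print([r.succeeded for r in results])
```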

Querying: When you query the indexed data (i.e. when you prompt the AI engine), a search engine processes that query and retrieves the relevant documents or data segments. This happens within your Azure environment, within your tenant.
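Continuing the illustration, a query against that index retrieves only the most relevant segments; the names are again hypothetical and the search runs entirely inside your tenant:

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Read-only search against the index from the previous sketch
# (a query key is sufficient here; names are hypothetical).
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="company-documents",
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_QUERY_KEY"]),
)

question = "How many days of annual leave do employees get?"
results = search_client.search(search_text=question, top=3)
relevant_segments = [doc["content"] for doc in results]
print(relevant_segments)
```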

Integration with Azure OpenAI Service: Once the relevant data is retrieved from the indexing and querying processes, we use the Azure OpenAI Service to further analyse or extract insights from the retrieved data. The specific data sent to the OpenAI model for analysis typically includes only the query and the text or data segments relevant to that query, not the entire dataset.

Data Sent to OpenAI: The data sent to the OpenAI service for processing would be the text or information segments that you are querying or analysing. This means that only the specific parts of the data that are directly relevant to the queries made are sent to the OpenAI model.
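To make the final step concrete, a hedged sketch of sending only the question and the retrieved segments to the Azure OpenAI deployment, reusing the hypothetical names from the sketches above; nothing here is a specific product detail beyond what the answer states:

```python
import os

from openai import AzureOpenAI

# Only the user's question and the retrieved segments are sent to the model,
# never the entire dataset (hypothetical endpoint and deployment names).
client = AzureOpenAI(
    azure_endpoint="https://<your-private-openai-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

question = "How many days of annual leave do employees get?"
relevant_segments = [
    "Employees are entitled to 25 days of annual leave per year...",  # from the query step
]

context = "\n\n".join(relevant_segments)
response = client.chat.completions.create(
    model="gpt-4",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```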

How does Engine ensure that data sent to OpenAI is secure?

Data Security: Azure ensures that data sent to and from Azure OpenAI Service is encrypted in transit.

Compliance: Azure services are designed to comply with various industry standards and regulations, ensuring that your data is handled securely.

Are there restrictions on the size of prompts and responses in your AI solutions?

For GPT-3.5 Turbo, the maximum limit is about 4,096 tokens per request.
For the standard GPT-4 model, the maximum limit is around 8,192 tokens per request. These limits include both input tokens (the prompt) and output tokens (the response), e.g. when using GPT-4, a user’s prompt and the response combined cannot exceed 8,192 tokens. Tokens can represent individual characters, parts of words, or entire words, depending on the complexity and length of the text. For example, small words (‘dog’, ‘run’, ‘a’) tend to be one token, and punctuation marks are typically treated as separate tokens (‘Hello!’ would be 2 tokens).
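As a rough illustration (not part of the service itself), token counts can be checked with OpenAI’s open-source tiktoken library, assuming the cl100k_base encoding used by GPT-3.5 Turbo and GPT-4:

```python
import tiktoken

# cl100k_base is the tokeniser encoding used by GPT-3.5 Turbo and GPT-4.
encoding = tiktoken.get_encoding("cl100k_base")

for text in ["dog", "Hello!", "A user's prompt and the response share the same token budget."]:
    tokens = encoding.encode(text)
    print(f"{text!r} -> {len(tokens)} tokens")
```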
