Updated API Rate Limits

Overview

Following an internal review of API performance, we are replacing the single API rate limit with separate limits per endpoint.

Previously, a single API rate limit was defined as "sustained calls of over 100 requests per minute on any endpoint" and was enforced by directly disabling the responsible API key. Following our recent announcement regarding API Rate Limit Responses, we will no longer disable API keys in this way.

Who will be affected by this change?

This change will affect any users directly using the API. As a precaution, we have been monitoring usage across all customers over the last few months to understand whether any will be affected by these changes. The handful of customers currently exceeding some of these limits have been contacted directly so that they can resolve any issues before we roll out the changed limits.

Details

We are moving from a single limit across all endpoints to separate limits per endpoint, to better reflect the load each call places on the system. These limits have been set based on extensive performance tests and are designed to minimise the impact on API users. These changes will further improve the robustness of the system for all users. The limits are outlined in the table below and apply over a time period of 1 minute (60,000 ms):

Methods | Endpoint | Calls Limit (per minute)
POST | api/aqs/query | 120
POST | api/aqs/join | 120
POST | api/aqs/statistics | 120
POST | api/bulk/generic | 20
PUT, POST, DELETE | api/design | 50
PUT, POST, DELETE | api/designInterface | 50
* | api/file (see Note 1 below) | 100
GET | api/item | 300
GET | api/item-version | 300
PUT, POST, DELETE | api/item | 150
GET | api/item/*/graph | 100
GET | api/item/*/parents | 300
POST | api/item-log/item/*/reconstruct | 100
GET | api/item-log/item/* | 200
GET | api/item-log/design/* | 200
GET | api/layer/*/*/*/*/network | 2000
GET | api/layer/*/*/*/*/cluster | 5000
GET | api/layer/*/*/*/*/basic | 2000
* | api/route/* | 50
GET | api/oauth-reply | 100
POST | api/oauth-token-login | 100
POST | api/sync | 120
PUT, POST, DELETE | api/workflow | 50
PUT, POST, DELETE | api/workflow/*/action | 50
PUT, POST, DELETE | api/workflow-action-group | 50
PUT, POST, DELETE | api/workflow-action-group/*/action | 50
POST | api/(workflow|workflow-action-group)/*/action/parameters | 10
* | api/(budget|change-component|defect|inspection|job|project|team|work-unit) | 100
POST | api/extended/bulk | 20
* | All other endpoints not specified above | 1000

Note 1 - This excludes the api/file/bulk-download/{id}/file endpoint, which starts a background task.

Note that for many endpoints the call limit per minute has been increased, in some cases allowing 10 to 50 times more traffic. Where the limit has been reduced below the previous value of 100, the endpoints concerned are computationally heavier and are expected to be called less frequently (e.g. workflows and bulk generic).
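As a rough illustration of how an API client might respect these per-endpoint limits, the sketch below retries a call when the API signals that a limit has been hit. It assumes, in line with the earlier API Rate Limit Responses announcement, that an HTTP 429 status is returned when a limit is exceeded and that a Retry-After header may be present; the base URL, API key and endpoint payload are placeholders, so treat this as a pattern rather than a contract.

import time
import requests

BASE_URL = "https://api.example.com"  # placeholder base URL
API_KEY = "APIKEYVALUEHERE"           # placeholder key


def post_with_backoff(path, payload, max_attempts=5):
    """POST to an endpoint, backing off when a rate limit response is returned."""
    url = f"{BASE_URL}/{path}"
    headers = {"Authorization": f"Bearer {API_KEY}"}  # new Bearer scheme, see the API key section below
    for attempt in range(max_attempts):
        response = requests.post(url, json=payload, headers=headers)
        if response.status_code != 429:  # assumed rate-limit status code
            return response
        # Honour Retry-After if the server provides it, otherwise back off exponentially
        wait = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"Rate limit still exceeded after {max_attempts} attempts")


# Example: an AQS query, limited to 120 calls per minute in the table above;
# the request body itself is elided here.
response = post_with_backoff("api/aqs/query", payload={})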

Expected Release Dates

Staging: 25th February 2022

Live: 31st March 2022

Access Control Based on Authentication Method

Overview

We are adding a new feature that will allow administrators to specify the authentication method (such as Microsoft Online SSO) that must be used to access Customer Projects.

Who does this affect?

This change will affect administrators who want to restrict their users to a specified authentication method, rather than the current choice of email/password or SSO options including Google and Microsoft. Note that these policies can only be configured using Alloy Forge, so please contact the Support Team if you want to change the way in which users access your project.

Details

This change introduces a new concept called the "Customer Security Policy". In simple terms, this is an object stored in the Customer document that captures the customer's choices for security-related settings.

Currently the security policy only includes the accepted authentication method property. If set, a user can only create a customer session for a specific customer if that session is created through one of the accepted authentication methods. Normally a customer session is created by switching a master session to a customer session. This means the master session will need to have been created through one of the accepted authentication methods.

If the user utilises an authentication method not on the accepted list, an error message will be presented and the user returned to the logon screen to retry. 

The security policies may be set using the following Forge endpoints:

GET api/customer/{id}/security-policy - Gets the customer security policy for a specific customer

PUT api/customer/{id}/security-policy - Edits the customer security policy for a specific customer
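As an illustration only, a policy restricting a customer to Microsoft SSO might be set with a request along the lines of the sketch below. The acceptedAuthenticationMethods property name, the method identifier, the authorisation header and the id/signature values are all assumptions made for the sake of the example; the actual request model may differ, so check the Forge API reference before use.

import requests

BASE_URL = "https://api.example.com"      # placeholder Forge base URL
TOKEN = "SESSIONTOKENHERE"                # placeholder credential
customer_id = "5f1800000000000000000000"  # placeholder customer id

# Hypothetical request body: the real security-policy model may use different
# property names and method identifiers.
body = {
    "acceptedAuthenticationMethods": ["MicrosoftOnlineSso"],
    "signature": "618100000000000000000000",
}

response = requests.put(
    f"{BASE_URL}/api/customer/{customer_id}/security-policy",
    json=body,
    headers={"Authorization": f"Bearer {TOKEN}"},  # assumed auth header
)
response.raise_for_status()
print(response.json())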

Expected Release Date

13th January 2022

Option for Export Geometry Projection

Overview

We are adding a new option to the data Export endpoint which allows you to specify the projection to be used for geometry data, by providing the Proj4 string used to convert from WGS84 (Lat-Long).

Who does this affect?

This change will affect API users of the Export endpoint but will not modify the existing behaviour if the optional setting is not provided. 

Details

The following endpoint:

POST /api/export 

now accepts an optional string proj4 that may be used to convert geometry coordinates into the spatial reference system of your choice. This mirrors the optional Proj4 setting used during import.

Without the proj4 string, geometry is exported in WGS84 (Longitude, Latitude) coordinates.

A database of Proj4 strings is available for searching here: https://epsg.io/

For example, for the UK British National Grid system, the Proj4 string is

+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +datum=OSGB36 +units=m +no_defs
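For instance, an export request targeting British National Grid could pass the string above in the new proj4 property. The sketch below is illustrative only: apart from proj4, the body merely hints at the usual export request properties, which are unchanged, and the base URL and API key are placeholders.

import requests

BASE_URL = "https://api.example.com"  # placeholder base URL
API_KEY = "APIKEYVALUEHERE"           # placeholder key

bng_proj4 = (
    "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 "
    "+x_0=400000 +y_0=-100000 +ellps=airy +datum=OSGB36 +units=m +no_defs"
)

# Only proj4 is new; the remaining export request properties are unchanged
# and elided here.
body = {
    "proj4": bng_proj4,
    # ... the rest of the usual export request model ...
}

response = requests.post(
    f"{BASE_URL}/api/export",
    json=body,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.status_code)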

Expected Release Date

13th January 2022

Export Data to ESRI SHP Files

Overview

We are adding a new option to the Data Export endpoint which allows you to set the export format to ESRI SHP (Shape) file instead of the standard CSV export.

Who does this affect?

This change will affect API users of the Export endpoint who require ESRI shape file output. 

Details

The following endpoint:

POST /api/export 

can now be used to specify the export type as ESRI SHP. Note that due to length constraints on field data in the SHP format, attribute values may be truncated in the exported files.

The endpoint now accepts two different request models:

ShapefileExportWebRequestModel - requests made with this model will return files as SHP.

CsvExportWebRequestModel - requests made with this model will return data as a CSV.

Both these models extend the ExportWebRequestModelBase model.

Note that the SHP format consists of multiple (3) files, which will be returned by the Export File endpoint as a zipped bundle:

GET /api/export/{id}/file
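Putting the two endpoints together, a shapefile export might look roughly like the sketch below. The discriminator follows the pattern used by the other request models in these notes, but the remaining request properties, the authorisation header and the shape of the response (including how the export id is returned) are assumptions made purely for illustration.

import requests

BASE_URL = "https://api.example.com"  # placeholder base URL
API_KEY = "APIKEYVALUEHERE"           # placeholder key
headers = {"Authorization": f"Bearer {API_KEY}"}

# Request a shapefile export rather than the default CSV.
body = {
    "discriminator": "ShapefileExportWebRequestModel",
    # ... remaining export request properties, elided here ...
}
export = requests.post(f"{BASE_URL}/api/export", json=body, headers=headers).json()

# Hypothetical: assume the response exposes the export id as "id". In practice the
# export may complete asynchronously, in which case the file should only be
# fetched once the export task has finished.
export_id = export["id"]

# The SHP output is returned by the Export File endpoint as a zipped bundle.
bundle = requests.get(f"{BASE_URL}/api/export/{export_id}/file", headers=headers)
with open("export.zip", "wb") as f:
    f.write(bundle.content)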

Expected Release Date

13th January 2022

New API Key Management

Overview

As part of ongoing improvements, we are updating the way API keys are managed in the system. This change will see the provision of an API key to every user discontinued, with future keys being generated on demand using the new mechanism described below. As part of this change, the way in which the keys are provided to the API as a token header will change to use the Bearer Authentication method.  

Who will this affect?

This change will affect all API users, as any existing API keys in use will be retired according to the timeline specified below, and all users are expected to have transitioned to the new keys by that time.

Details

A new mechanism for API keys has been added and, alongside it, a new series of endpoints. The current API key mechanism, where all users have an API key created alongside them, is now obsolete and will be removed as part of Phase 2.

Under the new mechanism, users will no longer have API keys created for them by default; instead, they will be able to create them on demand, up to a maximum of 100 keys per user per customer. The new API keys will also come with a label field (to provide some description), an enabled flag (if false, the API key will not be valid for use), and an optional expireAt datetime (once an API key has expired it cannot be edited or used to authenticate; it can only be deleted).

An API key value (or token) will never be shown again after the creation of the API key, so if an issued API key is lost or forgotten the user will need to generate a new one. The database only contains a one-way hashed version of the key, just like for passwords, so it is not possible for anyone to impersonate the user using the key, even with direct access to the database.

The new endpoints are as follows:

GET api/api-key/{id} - Allows the caller to get the information of an API key by its ID

GET api/api-key - Allows the caller to list API keys by user and customer. Optionally it accepts a filter for the "label" field

PUT api/api-key/{id} - Allows the caller to edit an API key to change the label, enabled flag and expiration

POST api/api-key - Allows the caller to create an API key; this is the only endpoint that will return the actual value of an API key

DELETE api/api-key/{id} - Allows the caller to delete an API key

Important: The way you pass the API key is changing!

The new API keys will need to be passed according to the OAuth 2.0 Bearer token format (RFC 6750). That is, a request header named "Authorization" will need to be included, which looks like this:

Authorization: Bearer APIKEYVALUEHERE

This change means that the API keys provided by the old and new mechanisms are not interchangeable: old keys will not be accepted via the Authorization header, and new keys will need to be generated.
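As a rough end-to-end sketch: create a key with the new endpoint (using a customer session token, as noted in the FAQ below) and then pass the returned value as a Bearer token on subsequent calls. The label, enabled and expireAt properties are those described above, but the session header, the base URL and the name of the response property carrying the key value are assumptions made for illustration.

import requests

BASE_URL = "https://api.example.com"        # placeholder base URL
SESSION_TOKEN = "CUSTOMERSESSIONTOKENHERE"  # placeholder customer session token

# Create a key; label, enabled and expireAt are the properties described above.
created = requests.post(
    f"{BASE_URL}/api/api-key",
    json={
        "label": "Nightly sync job",
        "enabled": True,
        "expireAt": "2022-12-31T23:59:59Z",  # optional
    },
    headers={"Authorization": f"Bearer {SESSION_TOKEN}"},  # assumed header for the session token
).json()

# Hypothetical response property: this is the only time the key value is returned,
# so store it somewhere safe immediately.
api_key_value = created["apiKeyValue"]

# Use the key on later calls with the Bearer scheme (RFC 6750).
items = requests.get(
    f"{BASE_URL}/api/item",
    headers={"Authorization": f"Bearer {api_key_value}"},
)
print(items.status_code)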

FAQ

Will API keys work across Alloy regions/environments?

No, each key is specific to a user and customer project. Customers with Live and Staging environments should consider these separately as there is no link between projects across environments. 

Is there a default expiration date for each API key?

No, by default each API key will have no expiration date set and it will be the responsibility of the user to set this on creation or edit.

Will I be able to retrieve my API key at a later date following creation?

In line with best practice, the API key will only be given in the response model on creation. It will then be the creator's responsibility to safely store this key for later use. Only the associated data for the API keys (such as the label, enabled flag and expiration date) will be provided in the GET, PUT and DELETE responses.

Will I need an API key to generate an API key?

Once you've authenticated using your login credentials via the session endpoint and created a customer session, you'll be able to use the customer session token to generate and manage keys using the endpoints described above. So no, you won't be stuck in an infinite loop trying to get an API key 😜.

Expected Release Date

13th January 2022

Phase 2: Retirement of Existing Token Method

26th January 2023

Specify an Aggregation Type for AQS Join Data

Overview

Following our previous announcement on the temporary change to the way AQS Join Query results are displayed in custom reports, we have now added an option to allow you to specify the aggregation behaviour via a setting on the table header in the report. Two options are available: TakeOne, which takes a single item from the possible results to display as an example, and Count, which displays the count of the linked results.

Who will this affect?

This change will affect anyone using the AQS Query Data source with join attributes within the Report Builder to build custom reports.

Details

We have now added support to set an aggregation type per data source header in custom reports.

This aggregation type can either be TakeOne or Count.

TakeOne will take the attribute value of the first item returned by the join attribute.

Count will set the total count of items returned by the join attribute and display it as "X Items".

This property is optional: if all the attributes along the join path have the maximum number of links set to one (max: 1) in their options, it defaults to TakeOne; otherwise it defaults to Count. This replicates the behaviour provided through Data Explorer. As these header properties can only be set once the attribute type is defined, they can only be set when editing a data source.

Data Source Editing

PUT /api/custom-report/{customReportCode}/data-source/{code}

Prior to this change, an AQS Query data source was edited using the "EditDataSourceAqsQueryWebRequestModel" model.

This model has now been updated to accept an additional property called "headerSettings", which allows you to configure the headers further, as follows:

{
  "discriminator": "EditDataSourceAqsQueryWebRequestModel",
  "name": "All dogs",
  "required": false,
  "signature": "618146945b5b25015cf8d186",
  "dodiCode": "designs_dog_5f181f89f4f5bf0066f80812",
  "attributes": [],
  "joinAttributes": [],
  "headerSettings": [
    {
      "headerId": "myTakeOneHeaderId",
      "aggregationType": "TakeOne"
    },
    {
      "headerId": "myCountHeaderId",
      "aggregationType": "Count"
    }
  ]
}

The response model is still "EditDataSourceWebResponseModel"; the change here is that the "headers" property found inside the "customReport" and "dataSources" properties will, when the data source is of model "CustomReportAqsQueryDataSourceWebModel", optionally contain an "aggregationType" property reflecting the behaviour described above.

Note that as part of this change, the default behaviour has been reverted to the Count behaviour mentioned above (which was the default until v2.29.0, when it was temporarily changed to TakeOne to avoid the report failures described here).

Example

Let's assume there is a project containing 4 job tasks, and the user creates a custom report using an AQS Query data source rooted on Projects and linking to Jobs via the Tasks attribute (Project DS -> Tasks to Jobs DS -> Title). If a Table control based on this data source is then added to the layout, the data displayed in this table depends on the aggregation type chosen.

Take One

Using the TakeOne option, a single exemplar item is displayed, e.g. JOB-9. Note that this is not necessarily the first item in the list and is dependent on the order returned by the system.

Count

Using the Count option, the number of linked items is displayed, for example "4 Items".


Expected Release Date

13th January 2022

Description Property in Workflows

Overview

We have added a new string property to Workflows, Workflow Actions, and Workflow Action Groups. This property is intended to allow a textual description of the element to be added, to aid others in understanding the use of the element. The description is then returned by any endpoint returning the respective details.

Who will this affect?

This change will affect users who create, edit or read Workflows, Workflow Actions or Workflow Action Groups directly from the API.

Details

The following endpoints have had the property added to their respective request models:

POST api/workflow
PUT api/workflow/{code}
POST api/workflow/{code}/action
PUT api/workflow/{code}/action/{id}
PUT api/workflow-action-group/{code}
POST api/workflow-action-group
POST api/workflow-action-group/{code}/action
PUT api/workflow-action-group/{code}/action/{id}

The following response models also had a description property added to them: 

WorkflowActionWebModel 
WorkflowWebModel
WorkflowActionGroupWebModel 

In all of these instances, the description property is an optional string property with a maximum allowable length of 1024 characters.
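For example, adding a description when editing a workflow might look roughly like the sketch below. Only the description property (and its 1024-character limit) comes from this change; the other properties of the workflow edit request model, the base URL, the key and the workflow code are placeholders or elided.

import requests

BASE_URL = "https://api.example.com"     # placeholder base URL
API_KEY = "APIKEYVALUEHERE"              # placeholder key
workflow_code = "workflows_exampleCode"  # placeholder workflow code

# Only "description" is new (optional string, max 1024 characters); the remaining
# properties of the workflow edit request model are unchanged and elided here.
body = {
    "description": "Raises a job whenever a defect is marked as high priority.",
    # ... other workflow edit properties, including the item signature ...
}

response = requests.put(
    f"{BASE_URL}/api/workflow/{workflow_code}",
    json=body,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.status_code)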

Expected Release Date

9th December 2021

Orientation Property Added to Custom Report Flow Documents

Overview

In order to support setting the page orientation of Custom Report Flow documents, we have added a new property called orientation to the CustomReportDocumentDefinitionFlowWebModel. The property has also been added to the endpoints which create or edit the Flow document definition.

Who will this affect?

This change affects users who are creating and editing custom report flow documents through the API.

Details

Responses that use the CustomReportDocumentDefinitionFlowWebModel will now need to include a definition for orientation.

Requests to the following endpoints will now require a value for orientation:

POST api/custom-report/{customReportCode}/document-definition 
PUT api/custom-report/{customReportCode}/document-definition/{id}

The orientation property can be one of two options:

  • Portrait
  • Landscape

For example, creating a Flow document via POST /api/custom-report/{customReportCode}/document-definition:

{
  "name": "ExampleDocument",
  "documentDefinitionInfo": {
    "discriminator": "CustomReportDocumentDefinitionFlowWebModel",
    "orientation": "Portrait",
    "controls": [...],
    "visualizations": [...]
  },
  "signature": "..."
}

Expected Release Date

9th December 2021

Change to AQS Query Results Display in Custom Reports

Overview

We are making a temporary change to how the results from an AQS Query Data Source are displayed in custom reports. This change affects the display in controls when linking to multiple items, making the behaviour consistent across all attribute types.

This change is being made to prevent report generation from failing under certain conditions described below.

Who will this affect?

This change will affect anyone using the AQS Query Data source with join attributes within the Report Builder to build custom reports.

Given the specific nature of this change, we do not expect it to widely affect users. However, if you notice that this change has had an adverse effect on the output of your reports, please do contact our support team.

 

Details

Previously, when an AQS Query resulted in multiple matched items for join attributes, this would be displayed as X Items in the resulting table cell in the report. However, since all entries in a table column must be of the same type, this would only work for String attributes (since the entry X Items is also a String) and would result in an error for other attribute types, with the report failing to generate.

From now on, resulting cells will show a single item attribute result, similar to the current behaviour of the single join result. If multiple values are matched, only the first attribute of the first item will be displayed. 

Example

Let's assume there is a project containing 4 job tasks, and the user creates a custom report using an AQS Query data source rooted on Projects and linking to Jobs via the Tasks attribute (Project DS -> Tasks to Jobs DS -> Title). If a Table control based on this data source is then added to the layout, the display of the data in this table will change as follows.

Before Change

Previously, the table would have shown 4 Items in the joined Tasks and Title column.


After Change

However, from now on this will display one of the job titles, e.g. JOB-9.

Note

If you would like to continue using this aggregation, this can be achieved by using a Join Data source rather than an AQS Query Data source.


Expected Release Date

30th September 2021


Import Data Limits

Overview 

As part of our ongoing work to provide greater clarity around system limits we are defining the limits when importing data. By defining these limits, we will be able to provide improved error handling and feedback. 

Who will this affect?

This change will affect anyone using the Alloy imports to perform large scale imports. Note that although the limits are defined with some headroom, imports with record counts over the set limits that may previously have succeeded will now fail.

Details

Data imports with no parent links configured will be limited to 1,000,000 records.

Data imports with any parent link configured will be limited to 500,000 records. This is due to the extra processing and storage overhead required to connect items as part of an import.

To import more records than the limits specified above, it will be necessary to stage the process and execute multiple imports (see the sketch at the end of this section).

The parent links mentioned are those that can be configured in the "Parents" section within the Gateway module, or, for API users, by using the parents property in the ImportValidateWebRequestModel:

{
  "designCode": "designs_test",
  "collection": "Live",
  "mode": "Insert",
  "settings": {
    "attributes": [],
    "networkReferences": [],
    "parents": [
      {
        "dodiCode": "designs_parent",
        "attributeCode": "attributes_link",
        "matchHeader": "MyHeader",
        "matchAttributeCode": "attributes_random"
      }
    ],
    "discriminator": "ImportSettingsDataWebModel"
  },
  "signature": "60e623e00000000000000000"
}
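If a dataset exceeds these limits, one way to stage the process is to split the source file into batches below the relevant limit and run one import per batch. The sketch below only covers the client-side splitting; submitting each batch then uses the normal import endpoints and is not shown.

import csv
import itertools

# 1,000,000 records for imports without parent links,
# 500,000 when any parent link is configured.
BATCH_LIMIT = 500_000


def split_csv(path, limit=BATCH_LIMIT):
    """Split a large CSV into numbered batch files, each within the import limit."""
    with open(path, newline="") as source:
        reader = csv.reader(source)
        header = next(reader)  # repeated at the top of every batch file
        batches = iter(lambda: list(itertools.islice(reader, limit)), [])
        for batch_number, rows in enumerate(batches, start=1):
            with open(f"{path}.batch{batch_number}.csv", "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(header)
                writer.writerows(rows)


# Each resulting batch file can then be imported separately via the normal import flow.
split_csv("items_to_import.csv")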

Expected Release Date

30th September 2021
