Handling API Requests for Data Stores

Before sending requests to Open Cloud APIs for standard data stores and ordered data stores, you need to understand how to handle them properly. For information on API usage, see the Usage Guide.


API Keys

Like all Open Cloud APIs, data store endpoints require every request to include the x-api-key header, which contains an API key with sufficient permissions for the request. This requires the key to be applied to the experience and the data store, and the endpoint operation to be permitted. If the key is invalid, 401 Unauthorized is returned. For more information on API keys, see Managing API Keys.
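As a sketch, the required headers for a data store request might be assembled like this in Python (the key value here is a placeholder; a real key comes from the Creator Dashboard):

```python
# Placeholder key for illustration only; never hard-code a real key.
API_KEY = "your-api-key-here"

def build_headers(content_type: str = "application/json") -> dict:
    """Return the headers every Open Cloud data store request needs."""
    return {
        "x-api-key": API_KEY,   # required on every request
        "Content-Type": content_type,
    }

headers = build_headers()
```

In practice you would load the key from an environment variable or secret store rather than embedding it in source.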


Throttling

All endpoints have two types of universe-level throttling: request-per-minute throttling and throughput throttling. For every experience, request-per-minute throttling allows a certain number of requests per minute, and throughput throttling allows a certain amount of data per minute, regardless of the number of API keys.

Unlike the Lua API, these limits currently do not scale based on user counts. Exceeding these limits causes the endpoint to return 429 Too Many Requests.
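Because exceeding the limits returns 429 Too Many Requests, clients typically retry throttled requests with exponential backoff. A minimal sketch (the delay schedule is an illustrative choice, not a documented requirement):

```python
def backoff_delays(base: float = 1.0, retries: int = 5) -> list:
    """Exponential backoff schedule in seconds: 1, 2, 4, 8, 16."""
    return [base * (2 ** i) for i in range(retries)]

def should_retry(status_code: int) -> bool:
    """Only retry on throttling; 400/401 indicate a bad request or key."""
    return status_code == 429
```

A caller would sleep for each delay in turn before re-sending, and give up once the schedule is exhausted.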

Standard Data Stores Throttling Limits

| Request Type | Methods | Throttle Limits |
| --- | --- | --- |
| Write | Set Entry, Increment Entry, Delete Entry | 10 MB/min/universe write throughput; 300 reqs/min/universe |
| Read | List Data Stores, List Entries, Get Entry, List Entry Versions, Get Entry Version | 20 MB/min/universe read throughput; 300 reqs/min/universe |

Ordered Data Stores Throttling Limits

| Request Type | Methods | Throttle Limits |
| --- | --- | --- |
| Write | Create, Increment, Update, Delete | 300 reqs/min/universe |
| Read | List, Get | 300 reqs/min/universe |

Input Validation

Before sending your request, validate endpoint parameters against the formatting requirements and constraints in the following table. Individual endpoints can have additional requirements beyond these. If a parameter doesn't satisfy these restrictions, the endpoint returns 400 Bad Request.

| Input | Constraints |
| --- | --- |
| universeId | The unique identifier of your experience. See Universe ID. |
| datastoreName | Length must be 50 bytes or less. Can't be null or empty. |
| scope | The scope of a data store. See Scopes. Length must be 50 bytes or less. |
| entryKey | Length must be 50 bytes or less. Can't be null or empty. |
| roblox-entry-attributes | Serialized JSON object. Length must be less than 300 bytes. |
| roblox-entry-userids | Serialized JSON array of 0-4 numbers. No more than 4 user IDs. |
| cursor | An indicator of more data available in the requested result set. See Cursors. |
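Checking these constraints client-side avoids a round trip that would end in 400 Bad Request. The helper below is a sketch of such a check, not part of the Open Cloud API; the parameter names mirror the table above:

```python
import json

def validate_entry_request(datastore_name, entry_key, attributes=None, user_ids=None):
    """Check the documented constraints before sending a request.
    Returns a list of violations; an empty list means the parameters look valid."""
    problems = []
    if not datastore_name or len(datastore_name.encode("utf8")) > 50:
        problems.append("datastore name must be 1-50 bytes")
    if not entry_key or len(entry_key.encode("utf8")) > 50:
        problems.append("entry key must be 1-50 bytes")
    if attributes is not None and len(json.dumps(attributes).encode("utf8")) >= 300:
        problems.append("serialized attributes must be under 300 bytes")
    if user_ids is not None and len(user_ids) > 4:
        problems.append("no more than 4 user IDs")
    return problems
```

Lengths are measured in UTF-8 bytes, matching the byte limits in the table rather than character counts.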

Universe ID

The Universe ID is the unique identifier of the experience in which you want to access your data stores. The value of an experience's Universe ID is the value of its DataModel.GameId. It is not the same as the Starting Place ID, which identifies the starting place of an experience rather than the entire experience.

You can obtain the Universe ID of an experience with the following steps:

  1. Navigate to the Creator Dashboard.

  2. Find the experience with data stores that you want to access.

  3. Click the button on the target experience's thumbnail to display a list of options, then select Copy Universe ID.

    Copy Universe ID option from Creator Dashboard
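Once copied, the Universe ID becomes part of every endpoint path. As a sketch, the base URL for standard data store requests might be built like this (the path follows the v1 Open Cloud pattern; confirm it against the API reference):

```python
def base_url(universe_id: int) -> str:
    """Base path for an experience's standard data store endpoints."""
    return (
        "https://apis.roblox.com/datastores/v1/universes/"
        f"{universe_id}/standard-datastores"
    )

url = base_url(1234567890)  # placeholder Universe ID
```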


Scopes

You can organize your data stores by setting a unique string as a scope that specifies a subfolder for entries. Once you set a scope, it is automatically prepended to all keys in all operations performed on the data store. Scopes are optional and default to global for standard data stores, but they are required for ordered data stores.

The scope categorizes your data with a string and the separator "/", such as special/1234 for a key 1234 under the scope special:


All data store entry operation methods have a Scope parameter for when you need to access entries stored under a non-default scope. For example, you might have a 1234 key under the default global scope and the same key under a special scope. You can access the former without the scope parameter, but to access the latter, you must specify the scope parameter as special in Get Entry or Increment Entry API calls.
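For instance, the query parameters for fetching the same key from each scope might be built like this (a sketch; the parameter names follow the Get Entry endpoint):

```python
def entry_params(datastore_name: str, key: str, scope: str = None) -> dict:
    """Query parameters for Get Entry; omitting scope targets the default global scope."""
    params = {"datastoreName": datastore_name, "entryKey": key}
    if scope is not None:
        params["scope"] = scope
    return params

default_params = entry_params("PlayerInventory", "1234")             # global scope
special_params = entry_params("PlayerInventory", "1234", "special")  # special scope
```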

Additionally, if you want to enumerate all keys stored in a data store that has one or more non-default scopes, set the AllScopes parameter of the List Entries method to true, in which case the call returns a tuple of key string and scope. In the previous example, List Entries would return both (1234, global) and (1234, special) in the response.

You can't pass the Scope and AllScopes parameters in the same request; otherwise the call returns an error. Using the helper functions from the Open Cloud APIs for data stores module, the following code illustrates how to read every key in a data store with a custom scope:

List Keys for Different Scopes

# Set up
import tutorialFunctions
DatastoresApi = tutorialFunctions.DataStores()
datastoreName = "PlayerInventory"
# List keys for global scope
globalScopeKeys = DatastoresApi.list_entries(datastoreName, scope = "global", allScopes = False)
# List keys for special scope
specialScopeKeys = DatastoresApi.list_entries(datastoreName, scope = "special", allScopes = False)
# List keys for all scopes (allScopes set to true)
allScopeKeys = DatastoresApi.list_entries(datastoreName, allScopes = True)

Keys with the corresponding scope are returned in the response:

Example Responses for Different Scopes

// Response for global scope
{ "keys": [{ "scope": "global", "key": "User_2" }], "nextPageCursor": "" }
// Response for special scope
{ "keys": [{ "scope": "special", "key": "User_2" }], "nextPageCursor": "" }
// Response for AllScopes
{ "keys": [{ "scope": "global", "key": "User_2" }, { "scope": "special", "key": "User_2" }], "nextPageCursor": "" }


Content-MD5

Content-MD5 is the base-64 encoded MD5 checksum of the request content. It's an optional request header for the Set Entry endpoint that checks data integrity and detects potential issues.

You can use the language of your choice to calculate the value of the content-md5 header. The following example uses Python. The hashlib.md5() and base64.b64encode() functions are available in Python standard libraries (2.7, 3+).

Generating Content-MD5

# With prompts
$ python -c "import base64, hashlib; print('content-md5: ' + str(base64.b64encode(hashlib.md5(bytes(input('content: '), encoding='utf8')).digest()), encoding='utf8'))"
content: 750
content-md5: sTf90fedVsft8zZf6nUg8g==
# Using just stdin and stdout
$ echo "750" | python -c "import base64, hashlib; print(str(base64.b64encode(hashlib.md5(bytes(input(), encoding='utf8')).digest()), encoding='utf8'))"

If you run into issues generating a valid content-md5 value, you might need to encode your request body in UTF-8 binary before computing the checksum.
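The same computation, with UTF-8 encoding applied explicitly per the note above, can be wrapped in a small reusable function:

```python
import base64
import hashlib

def content_md5(body: str) -> str:
    """Base-64 encoded MD5 checksum of the UTF-8 encoded request body,
    suitable as the value of the content-md5 header."""
    digest = hashlib.md5(body.encode("utf8")).digest()
    return base64.b64encode(digest).decode("utf8")
```

An MD5 digest is always 16 bytes, so the resulting header value is always a 24-character base-64 string.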


Cursors

Endpoints that return lists of data might also return a nextPageCursor string, which indicates that more data is available in the requested result set. To receive the next set, provide this string in the cursor query parameter of a subsequent request. If the cursor parameter is provided but invalid, the endpoint returns 400 Bad Request.

The format of cursor strings is not defined. You should not interpret or parse them as they may change at any time.
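The behavior above amounts to a loop that passes each nextPageCursor back as the cursor parameter until the endpoint stops returning one. A sketch using a stand-in fetch function (a real implementation would perform the List Entries HTTP call):

```python
def list_all(fetch_page):
    """Collect every key by following nextPageCursor until it is empty.
    fetch_page(cursor) stands in for one List Entries request."""
    keys, cursor = [], ""
    while True:
        page = fetch_page(cursor)
        keys.extend(page["keys"])
        cursor = page.get("nextPageCursor", "")
        if not cursor:  # empty cursor: no more data
            return keys

# Simulated two-page result set for illustration.
pages = {
    "": {"keys": ["User_1"], "nextPageCursor": "abc"},
    "abc": {"keys": ["User_2"], "nextPageCursor": ""},
}
all_keys = list_all(lambda c: pages[c])  # ["User_1", "User_2"]
```

Note that the cursor is treated as an opaque token throughout, consistent with the warning above.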


Filters

When sending requests to the List method for ordered data stores, you can add an optional filter query parameter to return entries with values in a specified range.

The filter parameter supports one logic operator, &&, and two comparison operators, <= for setting the maximum value and >= for setting the minimum value. If you want to set a range with both a max and min value, add && between the two sequences.

For example, to return entries with values that are less than or equal to 10, you need to input entry <= 10 as the filter value. To return entries with values between 10 and 50, input entry <= 50 && entry >= 10.

The following examples are incorrect filter values that can fail your requests:

  • entry<=10 - missing whitespace between each part of the sequence.
  • 10 <= entry - entry and the comparison value are on the wrong sides.
  • entry <= 10 && entry <= 50 - && can only combine the two different comparison operators to set a min and a max.
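A small helper that only produces well-formed filter strings avoids these mistakes. This is an illustrative sketch, not part of any Open Cloud library:

```python
def build_filter(min_value=None, max_value=None) -> str:
    """Build a filter string for the ordered data store List method.
    Whitespace, operand order, and && usage follow the documented format."""
    parts = []
    if max_value is not None:
        parts.append(f"entry <= {max_value}")
    if min_value is not None:
        parts.append(f"entry >= {min_value}")
    return " && ".join(parts)

build_filter(max_value=10)                # "entry <= 10"
build_filter(min_value=10, max_value=50)  # "entry <= 50 && entry >= 10"
```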

Allow Missing Flags

When sending requests to the Update method to update an existing ordered data store entry, you can add an optional allow_missing flag to create the entry if it doesn't exist.

When you set the allow_missing flag to True:

  • If the entry doesn't exist, the response returns a new entry.

  • If the entry exists and the new content matches the existing value of the entry, the existing entry remains unchanged.

  • If the entry exists and the content doesn't match the existing value of the entry, the response returns the entry with the updated new content value.
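As a sketch, the Update request's query parameters might be assembled like this (the parameter name follows the flag described above; the endpoint URL itself is not shown):

```python
def update_params(allow_missing: bool = False) -> dict:
    """Query parameters for the ordered data store Update method."""
    params = {}
    if allow_missing:
        # Create the entry if it does not already exist.
        params["allow_missing"] = "true"
    return params
```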