Memory Stores

MemoryStoreService is a high-throughput, low-latency data service that provides fast in-memory data storage accessible from all servers in a live session. Memory stores are suitable for frequent and ephemeral data that changes rapidly and doesn't need to be durable, because such data is faster to access and vanishes when it reaches its maximum lifetime. For data that needs to persist across sessions, use Data Stores.

Data Structures

Rather than giving you direct access to raw data, memory stores provide three primitive data structures shared across servers for quick processing: sorted maps, queues, and hash maps. Each data structure is a good fit for certain use cases:

  • Skill-based matchmaking - Save user information, such as skill level, in a shared queue among servers, and use lobby servers to run matchmaking periodically.
  • Cross-server trading and auctioning - Enable universal trading between different servers, where users can bid on items with real-time changing prices, with a sorted map of key-value pairs.
  • Global leaderboards - Store and update user rankings on a shared leaderboard inside a sorted map.
  • Shared inventories - Save inventory items and statistics in a shared hash map, where users can utilize inventory items concurrently with one another.
  • Cache for persistent data - Sync and copy your persistent data in a data store to a memory store hash map that can act as a cache and improve your experience's performance.

In general, if you need to access data based on a specific key, use a hash map. If you need that data to be ordered, use a sorted map. If you need to process your data in a specific order, use a queue.
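
As a quick orientation, the following minimal sketch shows how each structure is obtained from MemoryStoreService. The structure names ("Leaderboard", "MatchmakingQueue", "SharedInventory") are placeholder assumptions for illustration.

    local MemoryStoreService = game:GetService("MemoryStoreService")

    -- Ordered key-value pairs, e.g. a global leaderboard
    local leaderboard = MemoryStoreService:GetSortedMap("Leaderboard")

    -- First-in, first-out processing, e.g. a matchmaking queue
    local matchmakingQueue = MemoryStoreService:GetQueue("MatchmakingQueue")

    -- Key-based access without ordering, e.g. a shared inventory
    local sharedInventory = MemoryStoreService:GetHashMap("SharedInventory")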

Limits and Quotas

To maintain scalability and system performance, memory stores enforce data usage quotas for memory size, API requests, and data structure size.

Memory stores have an eviction policy based on expiration time, also known as time to live (TTL). Items are evicted after they expire, and memory quota is freed up for new entries. When you hit the memory limit, all subsequent write requests fail until items expire or you manually delete them.
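
For example, here is a minimal sketch of writing an entry with a short expiration so that it's evicted automatically; the map name, key, and 30-second TTL are placeholder assumptions.

    local MemoryStoreService = game:GetService("MemoryStoreService")
    local sessionMap = MemoryStoreService:GetSortedMap("SessionData")

    -- The third argument is the expiration (TTL) in seconds; after 30 seconds
    -- this item is evicted and its memory is freed.
    local success, err = pcall(function()
        sessionMap:SetAsync("player_123", { score = 10 }, 30)
    end)
    if not success then
        warn(err)
    end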

Memory Size Quota

The memory quota limits the total amount of memory that an experience can consume. It's not a fixed value. Instead, it changes over time depending on the number of users in the experience according to the following formula: 64KB + 1KB * [number of users]. The quota applies on the experience level instead of the server level.
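
For example, with 100 users in the experience, the quota is 64 KB + 100 × 1 KB = 164 KB; with 1,000 users, it grows to 1,064 KB, or roughly 1 MB.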

When users join the experience, the additional memory quota is available immediately. When users leave the experience, the quota doesn't shrink right away; it's reevaluated to a lower value only after a traceback period of eight days.

After your experience hits the memory size quota, any API requests that increase the memory size always fail. Requests that decrease or don't change the memory size still succeed.

With the observability dashboard, you can view the memory size quota of your experience in real time using the Memory Usage chart.

API Request Limits

For API request limits, there's a request unit quota that applies for all MemoryStoreService API calls. The quota is 1000 + 100 * [number of concurrent users] request units per minute.
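
For example, with 200 concurrent users, the experience as a whole can spend 1,000 + 100 × 200 = 21,000 request units per minute.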

Most API calls only consume one request unit, with a few exceptions:

  • MemoryStoreSortedMap:GetRangeAsync()

    Consumes units based on the number of returned items. For example, if this method returns 10 items, the call counts as 10 request units. If it returns an empty response, it counts as one request unit. (A paginated read sketch appears at the end of this section.)

  • MemoryStoreQueue:ReadAsync()

    Consumes units based on the number of returned items, just like MemoryStoreSortedMap:GetRangeAsync(), but consumes an additional unit every two seconds while reading. Specify the maximum read time with the waitTimeout parameter.

  • MemoryStoreHashMap:UpdateAsync()

    Consumes a minimum of two units.

  • MemoryStoreHashMap:ListItemsAsync()

    Consumes [number of partitions scanned] + [items returned] units.

The request unit quota is also applied at the experience level rather than the server level, which gives you the flexibility to allocate requests among servers as long as the total request rate doesn't exceed the quota. If you exceed the quota, the service throttles your requests and you receive an error response.

With the observability dashboard, you can view the request unit quota of your experience in real time.
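
Because MemoryStoreSortedMap:GetRangeAsync() consumes one unit per returned item, reading a large sorted map in pages makes its cost easy to estimate. The following is a minimal sketch, assuming a sorted map named "Leaderboard" and a page size of 100; it follows the common pagination pattern of passing the last returned key and sort key as the exclusive lower bound.

    local MemoryStoreService = game:GetService("MemoryStoreService")
    local leaderboard = MemoryStoreService:GetSortedMap("Leaderboard")

    -- Reads the whole map in pages of up to 100 items.
    -- Each page costs roughly [items returned] request units.
    local function readAllItems(map)
        local allItems = {}
        local exclusiveLowerBound = nil
        while true do
            local items = map:GetRangeAsync(Enum.SortDirection.Ascending, 100, exclusiveLowerBound)
            for _, item in ipairs(items) do
                table.insert(allItems, item)
            end
            if #items < 100 then
                break
            end
            -- Continue after the last key returned in this page
            exclusiveLowerBound = { key = items[#items].key, sortKey = items[#items].sortKey }
        end
        return allItems
    end

    local items = readAllItems(leaderboard)
    print("Read", #items, "items")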

Data Structure Size Limits

For a single sorted map or queue, the following size and item count limits apply:

  • Maximum number of items: 1,000,000
  • Maximum total size (including keys for sorted map): 100 MB

Per-Partition Limits

See Per-Partition Limits.

Best Practices

To keep your memory usage pattern optimal and avoid hitting the limits, follow these best practices:

  • Remove processed items. Consistently cleaning up read items using the MemoryStoreQueue:RemoveAsync() method for queues and MemoryStoreSortedMap:RemoveAsync() for sorted maps frees up memory and keeps the data structure up to date (see the queue sketch after this list).

  • Set the expiration time to the smallest time frame possible when adding data. Though the default expiration time is 45 days for both MemoryStoreQueue:AddAsync() and MemoryStoreSortedMap:SetAsync(), setting the shortest practical time automatically cleans up old data and prevents it from filling up your memory usage quota.

    • Don't store a large amount of data with a long expiration, as it risks exceeding your memory quota and potentially causing issues that can break your entire experience.
    • Always either explicitly delete unneeded items or set a short item expiration.
    • Generally, you should use explicit deletion for releasing memory and item expiration as a safety mechanism to prevent unused items from occupying memory for an extended period of time.
  • Only keep necessary values in memory.

    For example, for an auction house experience, you only need to maintain the highest bid. You can use MemoryStoreSortedMap:UpdateAsync() on a single key to keep the highest bid rather than keeping all bids in your data structure.

  • Use exponential backoff to help stay below API request limits.

    For example, if you receive a DataUpdateConflict, you might retry after two seconds, then four, then eight, rather than constantly sending requests to MemoryStoreService to get the correct response (a retry sketch follows this list).

  • Split giant data structures into multiple smaller ones by sharding.

    It's often easier to manage data in smaller structures rather than storing everything in one large data structure. This approach can also help you avoid usage and rate limits. For example, if you have a sorted map that uses prefixes for its keys, consider separating each prefix into its own sorted map. For an especially popular experience, you might even separate users into multiple maps based on the last digits of their user IDs (a sharding sketch follows this list).

  • Compress stored values.

    For example, consider using the LZW algorithm to reduce the stored value size.
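
To illustrate removing processed items and setting short expirations, here is a minimal sketch of a queue producer and consumer; the queue name, payload, 300-second expiration, and 30-second wait timeout are placeholder assumptions.

    local MemoryStoreService = game:GetService("MemoryStoreService")
    local queue = MemoryStoreService:GetQueue("TaskQueue")

    -- Add an item with a short expiration (in seconds) so unread items are
    -- evicted automatically instead of counting against the quota for 45 days.
    queue:AddAsync({ userId = 123 }, 300)

    -- Read up to 10 items, waiting at most 30 seconds for items to appear.
    local items, readId = queue:ReadAsync(10, false, 30)
    if items and #items > 0 then
        for _, item in ipairs(items) do
            -- Process the item here (placeholder)
            print(item.userId)
        end
        -- Remove the processed items so they free memory and aren't re-read
        -- after the invisibility timeout expires.
        queue:RemoveAsync(readId)
    end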
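
For the exponential backoff practice, a retry helper might look like the following minimal sketch; the starting delay, retry count, and the wrapped call are assumptions for illustration.

    -- Retries an operation with exponentially increasing delays (2s, 4s, 8s, ...)
    -- instead of hammering MemoryStoreService after a failure.
    local function retryWithBackoff(operation, maxRetries)
        local delaySeconds = 2
        for attempt = 1, maxRetries do
            local success, result = pcall(operation)
            if success then
                return result
            end
            warn("Attempt " .. attempt .. " failed: " .. tostring(result))
            if attempt < maxRetries then
                task.wait(delaySeconds)
                delaySeconds = delaySeconds * 2
            end
        end
        error("Operation failed after " .. maxRetries .. " attempts")
    end

    -- Example usage with a sorted map (placeholder names)
    local MemoryStoreService = game:GetService("MemoryStoreService")
    local map = MemoryStoreService:GetSortedMap("Leaderboard")
    local value = retryWithBackoff(function()
        return map:GetAsync("player_123")
    end, 5)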
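
For the sharding practice, one possible scheme (a sketch, not the only approach) is to route each user to one of several sorted maps based on the last digit of their user ID.

    local MemoryStoreService = game:GetService("MemoryStoreService")

    -- Splits one large leaderboard into 10 smaller sorted maps,
    -- keyed by the last digit of the user ID.
    local function getLeaderboardShard(userId)
        local shardIndex = userId % 10
        return MemoryStoreService:GetSortedMap("Leaderboard_" .. shardIndex)
    end

    -- Example usage (placeholder values): write the user's score to their shard.
    local userId = 123456789
    local shard = getLeaderboardShard(userId)
    shard:SetAsync("user_" .. userId, 987, 86400)  -- expires after one day

Note that producing a combined view, such as a global top 100, then requires querying each shard and merging the results.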

Observability

The Observability Dashboard provides insights and analytics for monitoring and troubleshooting your memory store usage. With real-time updating charts on different aspects of your memory usage and API requests, you can track the memory usage pattern of your experience, view the current allocated quotas, monitor the API status, and identify potential issues for performance optimization.

The following table lists and describes all status codes of API responses available on the Observability Dashboard's Request Count by Status and Requests by API x Status charts. For more information on how to resolve these errors, see Troubleshooting. For the specific quota or limit that an error relates to, see Limits and Quotas.

Status Code | Description
Success | Success.
DataStructureMemoryOverLimit | Exceeds the data structure level memory size limit (100 MB).
DataUpdateConflict | Conflict due to a concurrent update.
AccessDenied | Unauthorized to access experience data. This request doesn't consume request units or use quota.
InternalError | Internal error.
InvalidRequest | The request doesn't have required information or has malformed information.
DataStructureItemsOverLimit | Exceeds the data structure level item count limit (1M).
NoItemFound | No item found by MemoryStoreQueue:ReadAsync() or MemoryStoreSortedMap:UpdateAsync(). ReadAsync() polls every two seconds and returns this status code until it finds items in the queue.
DataStructureRequestsOverLimit | Exceeds the data structure level request unit limit (100,000 request units per minute).
PartitionRequestsOverLimit | Exceeds the partition request unit limit.
TotalRequestsOverLimit | Exceeds the universe-level request unit limit.
TotalMemoryOverLimit | Exceeds the universe-level memory quota.
ItemValueSizeTooLarge | Value size exceeds the limit (32 KB).

The following table lists status codes returned on the client side, which are currently not available on the Observability Dashboard.

Status Code | Description
InternalError | Internal error.
UnpublishedPlace | You must publish this place to use MemoryStoreService.
InvalidClientAccess | MemoryStoreService must be called from the server.
InvalidExpirationTime | The 'expiration' field must be between 0 and 3,888,000 (seconds).
InvalidRequest | Unable to convert the value to JSON.
InvalidRequest | Unable to convert the sortKey to a valid number or string.
TransformCallbackFailed | Failed to invoke the transformation callback function.
RequestThrottled | Recent MemoryStores requests hit one or more limits.
UpdateConflict | Exceeded the maximum number of retries.

Troubleshooting

The following list describes the recommended troubleshooting options for each response status code.

DataStructureRequestsOverLimit / PartitionRequestsOverLimit / TotalRequestsOverLimit

  • Add a local cache by saving information to another variable and rechecking it after a certain time interval, such as 30 seconds.
  • Use the Request Count by Status chart to verify that you are receiving more Success responses than NoItemFound responses. Limit how often you hit MemoryStoreService with a failed request.
  • Implement a short delay between requests.
  • Follow the best practices, including:
    • Sharding your data structures if you receive a significant number of DataStructureRequestsOverLimit or PartitionRequestsOverLimit responses.
    • Implementing an exponential backoff to find a reasonable rate of requests to send.

DataStructureItemsOverLimit / DataStructureMemoryOverLimit / TotalMemoryOverLimit

  • See Best Practices for reducing memory usage and item counts, such as removing processed items, setting shorter expiration times, and sharding large data structures.

DataUpdateConflict

  • Implement a short delay between requests to avoid multiple requests updating the same key at the same time.
  • For sorted maps, use the callback function on the MemoryStoreSortedMap:UpdateAsync() method to abort an update that is no longer needed (returning nil from the callback cancels the operation), as the following code sample shows:

    Example of Aborting Request

    local MemoryStoreService = game:GetService("MemoryStoreService")
    local map = MemoryStoreService:GetSortedMap("AuctionItems")

    function placeBid(itemKey, bidAmount)
        map:UpdateAsync(itemKey, function(item)
            item = item or { highestBid = 0 }
            if item.highestBid < bidAmount then
                item.highestBid = bidAmount
                return item
            end
            -- Returning nil cancels the update because the new bid isn't higher
            print("item is " .. item.highestBid)
            return nil
        end, 1000)
    end

    placeBid("MyItem", 50)
    placeBid("MyItem", 40)
    print("done")

  • Investigate whether you're calling MemoryStoreService efficiently to avoid conflicts; ideally, you shouldn't over-send requests.
  • Consistently remove items once they are read, using the MemoryStoreQueue:RemoveAsync() method for queues and MemoryStoreSortedMap:RemoveAsync() for sorted maps.

InternalError

  • Retry the request; if the error persists, back off between retries as described in Best Practices.

InvalidRequest

  • Make sure that you include correct and valid parameters in your request. Examples of invalid parameters include:
    • An empty string
    • A string that exceeds the length limit

ItemValueSizeTooLarge

  • Shard or split the item value across multiple keys.
    • To keep grouped keys organized, add a shared prefix to each key so they sort together alphabetically.
  • Encode or compress stored values.

Testing and Debugging in Studio

The data in MemoryStoreService is isolated between Studio and production, so changing the data in Studio doesn't affect production behavior. This means that your API calls from Studio don't access production data, allowing you to safely test memory stores and new features before going to production.

Studio testing has the same limits and quotas as production. For quotas calculated based on the number of users, the resulting quota can be very small since you are the only user for Studio testing. When testing from Studio, you might also notice slightly higher latency and elevated error rates compared to usage in production due to some additional checks that are performed to verify access and permissions.

To debug memory stores in a live experience or when testing in Studio, use the Developer Console.