Script Profiler

Script Profiler is a tool within the Developer Console that allows you to record profiling sessions of all running scripts and view their CPU time costs with custom recording and display settings. It can record all types of function calls, including Luau functions, method calls, and property accesses. This tool is helpful for identifying scripts that take up the most CPU resources and slow down performance.

Recording Profiling Sessions

Before recording, you need to select a recording environment: the client or the server.

You can also set the following recording options:

  • Frequency: 1,000 times per second (1 KHz, default) or 10,000 times per second (10 KHz). The 10 KHz frequency has higher precision, as Script Profiler might not pick up API calls that execute more frequently than your selected frequency, but it also has a higher performance cost.
  • Session Length: 1-minute or Manual (default). The manual option requires you to stop recording manually.
  • Live Polling Behavior: On or Off (default). This behavior polls and refreshes profiling data each second during a profiling session.
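The frequency tradeoff can be reasoned about numerically: a sampling profiler only observes what is running at each sample, so calls that complete in much less than one sampling interval may never appear in the data. A minimal sketch of that arithmetic (illustrative only, not taken from the profiler itself):

```python
def sampling_interval_ms(frequency_hz: int) -> float:
    """Time between profiler samples, in milliseconds."""
    return 1000.0 / frequency_hz

# At 1 KHz the profiler samples once per millisecond, so a function that
# completes in well under 1 ms can slip between samples; at 10 KHz the
# interval shrinks to 0.1 ms, catching much shorter calls.
print(sampling_interval_ms(1_000))   # 1.0
print(sampling_interval_ms(10_000))  # 0.1
```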

To record a new profiling session:

  1. Expand the tools dropdown to select ScriptProfiler.

  2. Expand the client-server dropdown to select Client or Server.

  3. (Optional) Check the Live checkbox to enable the live polling behavior.

  4. (Optional) Select Freq and Time to choose the recording frequency and session time length if you don't want to use the default values.

  5. Click Start to begin the profiling session. If you set a duration, Script Profiler displays a countdown timer with the remaining time in the session.

  6. Click Stop or wait until the recording finishes to display the profiling data.

Reading Profiling Data

After a session stops, Script Profiler generates a table showing how much CPU time each function call costs. The table sorts function calls from most to least time spent, and allows you to search for specific functions by name. It provides the following two views:

  • Callgraph (Default): Organizes function calls into a tree structure based on frame tasks. This view displays each task category as a node under the same root and allows you to expand nodes to view functions. You can also hover over any node in the tree to view file and line information. For example, Stepped/CameraInput/<anonymous> might reveal Players.[LocalPlayer].PlayerScripts.PlayerModule.CameraModule.CameraInput:125.
  • Functions: Lists all functions without categorizing them by tasks.

You can also select from the following display options to suit your debugging needs:

  • Unit: Milliseconds (ms, default) or Percentage (%). Displays the time spent on each API call in milliseconds or as a percentage of the total recording session.
  • Average: Off (default) or a time window. Calculates the average time spent on each API call over the selected value. If you select an option that is longer than the session length, Script Profiler extrapolates the session length to calculate the average. For example, you can select the 5-minute option for a 1-minute session to see the expected average value if you run the code for 5 minutes.
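The extrapolation described above amounts to scaling a function's recorded time by the ratio of the selected window to the session length, assuming the workload stays steady. A small sketch of that calculation (the function name and parameters are illustrative, not part of the tool):

```python
def average_over_window(total_ms: float, session_s: float, window_s: float) -> float:
    """Project a function's total recorded time onto a longer window.

    Assumes the code keeps running at the same rate: 120 ms of CPU time
    recorded in a 60-second session projects to 600 ms over 5 minutes.
    """
    return total_ms * (window_s / session_s)

print(average_over_window(120.0, 60.0, 300.0))  # 600.0
```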

Exporting Profiling Data

Script Profiler allows you to export recorded profiling data as a JSON file. To export the data after a profiling session:

  1. On the Script Profiler window, click Export.

  2. On the export window, select the profiling session that you want to export. Rename the default file name if you want to set a custom name.

  3. Click Export to save the JSON file.


The exported JSON file includes the following fields:

  • Version: The version number.
  • SessionStartTime: A timestamp in milliseconds that records the session start time.
  • SessionEndTime: A timestamp in milliseconds that records the session end time.
  • Categories: An array of frame task categories recorded in the profiling session. Each entry includes:
    • Name: The name of each frame task category.
    • NodeId: The unique identifier of a task category (node). It's a 1-based index into the Nodes array. For example, you can look up the node with the NodeId of 123 by retrieving the 123rd element in Nodes.
  • Nodes: An array of nodes recorded in the profiling session. Each entry includes:
    • TotalDuration: The node's CPU time cost, in microseconds.
    • FunctionIds: An array of unique identifiers of functions.
    • NodeIds: An array of Node IDs.
  • Functions: An array of functions recorded in the profiling session.
    • TotalDuration: The function's CPU time cost, in microseconds.
    • Name: The name of the function, if available.
    • Source: The source of the function, if available.
    • Line: The line number of the function, if available.
    • Flags: A bit field that indicates the function's execution environment. It currently uses the following bits:
      • Bit 0: IsNative, set when the function executes under native code generation.
      • Bit 1: IsPlugin, set when the function executes as part of a plugin.
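The field list above can be exercised with a short script that loads an export, resolves a category's 1-based NodeId into the Nodes array, and decodes the Flags bits. A minimal sketch, assuming the field names described above; the sample data itself is invented for illustration:

```python
import json

# Invented sample export following the documented field names.
data = json.loads("""
{
  "Version": 1,
  "SessionStartTime": 0,
  "SessionEndTime": 60000,
  "Categories": [{"Name": "Parallel Luau", "NodeId": 2}],
  "Nodes": [
    {"TotalDuration": 1500, "FunctionIds": [1], "NodeIds": []},
    {"TotalDuration": 800, "FunctionIds": [], "NodeIds": [1]}
  ],
  "Functions": [
    {"TotalDuration": 1500, "Name": "update", "Source": "Script",
     "Line": 10, "Flags": 1}
  ]
}
""")

for category in data["Categories"]:
    # NodeId is a 1-based index into Nodes, so subtract 1 for Python lists.
    node = data["Nodes"][category["NodeId"] - 1]
    print(category["Name"], node["TotalDuration"], "us")

for fn in data["Functions"]:
    flags = fn.get("Flags", 0)
    is_native = bool(flags & 0b01)  # bit 0: IsNative
    is_plugin = bool(flags & 0b10)  # bit 1: IsPlugin
    print(fn.get("Name"), "native" if is_native else "interpreted",
          "plugin" if is_plugin else "non-plugin")
```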
Example Exported Profiling Data

The following excerpt shows a single entry from the Categories array:

{"Name":"Parallel Luau","NodeId":4},