
Files

Private: This item is only intended to be used by the module's authors.

Handles the storage and retrieval of potentially large data blobs within Roblox DataStores, working around the size limitations imposed by DataStore SetAsync/GetAsync calls.

Core Problem: Roblox DataStores have a maximum size limit per key (currently 4MB). Data that exceeds this limit cannot be stored directly.

Solution: Sharding & Compression

  1. Sharding: If the JSON-encoded data exceeds the configured maxShardSize, it is split into multiple smaller chunks (shards). Metadata about these shards (a unique shard ID and the total shard count) is stored in the primary DataStore entry for the original key, while the actual shard data is stored under separate keys derived from the shard ID.
  2. Compression: Before sharding, the JSON-encoded data is converted to binary using buffer.fromstring. This binary representation is then JSON-encoded again before being split into shards. Roblox automatically compresses buffers when encoding them using JSONEncode. This helps reduce the number of shards required, minimizing DataStore requests.

Shard Key Naming: Shard data is stored using keys formatted as {shardId}-{shardIndex}, where shardId is a unique GUID generated for the file and shardIndex is the 1-based index of the shard.
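The key scheme can be sketched as follows, assuming a File shape of `{ shard = "<GUID>", count = n }` as implied by the write documentation below (the field names here come from that description, not from verified source):

```lua
-- Enumerate the DataStore keys holding a sharded file's chunks.
-- Shard keys follow the documented {shardId}-{shardIndex} format.
local function shardKeys(file: { shard: string, count: number }): { string }
	local keys = {}
	for index = 1, file.count do
		table.insert(keys, ("%s-%d"):format(file.shard, index))
	end
	return keys
end

-- { shard = "abc123", count = 3 } → { "abc123-1", "abc123-2", "abc123-3" }
```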

Types

WriteParams

interface WriteParams {
	store: DataStore -- The DataStore instance to write to.
	data: any -- The Luau data to be stored. Must be JSON-encodable.
	maxShardSize: number -- The maximum size (in bytes) allowed for a single shard. Data exceeding this size after initial JSON encoding will trigger the sharding process.
	key: string -- The primary key under which the file metadata (or the full data if not sharded) will be conceptually associated. This key is not directly used for storing shards.
	userIds: { number }? -- An optional array of UserIds for DataStore tagging.
}

Parameters required for the write function.

WriteError

interface WriteError {
	error: string -- A string describing the error.
	file: File -- The file metadata that was being processed when the error occurred. This is used for cleanup operations if shards were partially written.
}

Structure representing an error encountered during the write operation.

ReadParams

interface ReadParams {
	store: DataStore -- The DataStore instance to read from.
	file: File -- The File object obtained from a previous write operation or retrieved from the primary DataStore key. This object determines whether to read directly or reconstruct from shards.
}

Parameters required for the read function.

Functions

splitString

Files.splitString(
	str: string, -- The string to be split.
	chunkSize: number -- The size of each chunk.
) → { string } -- A table containing the split chunks.

Splits a string into chunks of a specified size. Used for sharding large data blobs into smaller pieces.
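A minimal implementation consistent with this description (a sketch, not necessarily the module's exact code) could look like:

```lua
local function splitString(str: string, chunkSize: number): { string }
	local chunks = {}
	for start = 1, #str, chunkSize do
		-- string.sub clamps the end index, so the final chunk may be shorter.
		table.insert(chunks, string.sub(str, start, start + chunkSize - 1))
	end
	return chunks
end

-- splitString("abcdefgh", 3) → { "abc", "def", "gh" }
```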

isLargeFile

Files.isLargeFile(
	file: File -- The file object to check.
) → boolean -- True if the file is sharded, false otherwise.

Checks if a file object represents a sharded file (i.e., data stored across multiple keys).
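Based on the File fields described under write below (shard metadata versus inline data), the check is presumably equivalent to this sketch; the field name is an assumption taken from that description:

```lua
-- A file is "large" (sharded) when it carries shard metadata
-- instead of an inline data field.
local function isLargeFile(file): boolean
	return file.shard ~= nil
end
```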

write

Files.write(
	params: WriteParams -- The parameters for the write operation.
) → Promise<File> -- A Promise that resolves with a File object representing the stored data (either directly containing the data or shard metadata).

Writes data to the DataStore, automatically handling sharding and compression if necessary.

If the JSON-encoded data is smaller than maxShardSize, it's stored directly within the returned File object (in the data field).

If the data is larger, it's compressed, sharded, and stored across multiple DataStore keys. The returned File object will contain shard (the unique ID for the shards) and count (the number of shards) instead of the data field.

Errors

Type        Description
WriteError  Rejects with a `WriteError` if any shard fails to write.
string      Propagates errors from `DataStore:SetAsync` via `dataStoreRetry`.
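A typical call might look like the sketch below. The require path and store name are illustrative, and this page does not state whether write itself persists the returned File under `key` or leaves that to the caller, so the explicit `SetAsync` of the File is an assumption:

```lua
local DataStoreService = game:GetService("DataStoreService")
local Files = require(script.Parent.Files) -- illustrative path

local store = DataStoreService:GetDataStore("PlayerSaves") -- illustrative name

Files.write({
	store = store,
	key = "player_12345",
	data = { inventory = { "sword", "shield" }, gold = 250 },
	maxShardSize = 3_000_000, -- stay safely under the 4MB per-key limit
	userIds = { 12345 },
}):andThen(function(file)
	-- Persist the returned File under the primary key so it can be
	-- handed to Files.read later.
	store:SetAsync("player_12345", file, { 12345 })
end):catch(function(writeError)
	warn("Write failed:", writeError.error)
end)
```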

read

Files.read(
	params: ReadParams -- The parameters for the read operation.
) → Promise<any> -- A Promise that resolves with the original data.

Reads data from the DataStore, automatically handling reconstruction from shards if necessary.

If the provided file object contains the data field directly, it returns that data. If the file object contains shard and count fields, it reads all corresponding shards from the DataStore, concatenates them, decompresses the result, and returns the original data.

Errors

Type    Description
string  Rejects with an error message string if any shard is missing or if decoding/decompression fails. Propagates errors from `DataStore:GetAsync` via `dataStoreRetry`.
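The matching read path, under the same illustrative setup (fetching the File from the primary key, as the ReadParams description suggests):

```lua
local DataStoreService = game:GetService("DataStoreService")
local Files = require(script.Parent.Files) -- illustrative path

local store = DataStoreService:GetDataStore("PlayerSaves") -- illustrative name

-- The primary key holds the File: either inline data or shard metadata.
local file = store:GetAsync("player_12345")
if file then
	Files.read({ store = store, file = file }):andThen(function(data)
		print(data.gold)
	end):catch(function(err)
		warn("Read failed:", err) -- e.g. a missing shard
	end)
end
```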