Bring Your Collections into Booksnake
Booksnake is a tool for converting digitized collection materials into life-size virtual objects for embodied exploration in physical space. This page is designed to help GLAM professionals assess whether your institution’s digitized collections are compatible with Booksnake, and understand what that compatibility depends on.
Contact Sean Fraga, project director, with questions or to discuss your collection’s specific situation. Email Sean at booksnakeapp@gmail.com.
Is Your Collection Compatible? Start Here.
| Question | Answer | Result |
|---|---|---|
| 1. Does your institution provide access to digitized collections through IIIF? | No | Your collection is not currently compatible with Booksnake. See Section 1. |
| | Yes | Continue to Question 2. |
| 2. Does your institution make high-resolution images of collection materials publicly accessible? | No | Your collection is not currently compatible with Booksnake. See Section 2. |
| | Yes | Continue to Question 3. |
| 3. How does your institution provide dimensional information in item records? | Physical dimensions | Full compatibility. See Option A. |
| | Digitization resolution | Full compatibility. See Option B. |
| | Consistent digitization pipeline (e.g., FADGI standards) | Good compatibility with additional configuration. See Option C. |
| | Digitization target included in item images | Potential compatibility via Autosizer. See Autosizer. |
| | Textual physical descriptions only | Potential compatibility via Autosizer. See Autosizer. |
| | No dimensional information | Limited compatibility. Contact us to discuss your collection. |
Section 1. IIIF Access
Booksnake uses the International Image Interoperability Framework (IIIF) to access and download item metadata and digital images. Collections that are not accessible through IIIF are not currently compatible with Booksnake.
If your institution hasn’t built its own IIIF infrastructure, it may use a digital asset management (DAM) platform that already supports IIIF. See the IIIF Consortium’s list of IIIF-Compliant Vendors and Software Providers for more information.
API version support. Booksnake currently supports version 2.1 of both the IIIF Image API and IIIF Presentation API. We are actively building support for version 3.0 of the Image and Presentation APIs and will maintain backward compatibility with v2.1.
URL structure. Booksnake works by converting an item’s catalog page URL into the corresponding IIIF manifest URL. For this to work reliably, there must be a consistent, predictable relationship between the two — for example, a stable unique identifier present in both. This must hold for both the desktop and mobile versions of your institution’s online catalog.
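As an illustration, the conversion might look like the following sketch. The catalog URL pattern, identifier format, and manifest URL template here are hypothetical assumptions for illustration only; each institution’s actual scheme differs.

```python
import re

# Hypothetical catalog URL pattern and manifest URL template -- these are
# illustrative assumptions, not any specific institution's scheme.
CATALOG_PATTERN = re.compile(r"/catalog/(?:item/)?([A-Za-z0-9._-]+)")
MANIFEST_TEMPLATE = "https://iiif.example.org/presentation/{identifier}/manifest.json"

def manifest_url_from_catalog_url(catalog_url: str) -> str:
    """Derive a IIIF manifest URL from a catalog page URL via a shared identifier."""
    match = CATALOG_PATTERN.search(catalog_url)
    if match is None:
        raise ValueError(f"No recognizable identifier in {catalog_url!r}")
    return MANIFEST_TEMPLATE.format(identifier=match.group(1))
```

Because the identifier is the only bridge between the two URLs, any catalog page whose URL omits it (or formats it differently on mobile) would break this mapping, which is why the consistency requirement applies to both desktop and mobile catalogs.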
Metadata completeness. Booksnake displays all metadata provided in an item’s IIIF manifest, presenting it alongside the object in the app’s item view. Users are best served by IIIF manifests that contain full catalog records. Sparse or incomplete IIIF manifests will result in reduced context for users interacting with your institution’s materials.
Section 2. Image Resolution and Delivery
Booksnake requires sufficiently high-resolution images to produce clear, realistic virtual objects. Images that are too low in resolution will produce virtual objects with visible pixelation, particularly when users move close to an object to examine fine details.
Image resolution. As a general guideline, Booksnake produces clear virtual objects when item images are served at a minimum of 300 dpi. Higher resolutions produce better results, particularly for items with fine detail or small text. This threshold varies depending on item type and size. Contact us to discuss your collection’s specifics.
Image delivery. Booksnake constructs virtual objects from image files downloaded to the user’s device. Booksnake must be able to access and download these images using information contained in the item’s IIIF manifest — either directly via image URLs in the manifest, or by using a stable unique identifier in the manifest to construct an image request URL.
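When a manifest supplies only a stable identifier, a full-image request can be constructed following the IIIF Image API 2.1 URL pattern. The base URL below is an illustrative assumption, not a real endpoint:

```python
# The base URL is an illustrative assumption. The path segments follow the
# IIIF Image API 2.1 URL pattern:
#   {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
def full_image_url(image_service_base: str, identifier: str) -> str:
    """Build a request for the full image region at full size (Image API 2.1)."""
    return f"{image_service_base}/{identifier}/full/full/0/default.jpg"

url = full_image_url("https://iiif.example.org/image", "abc123")
assert url == "https://iiif.example.org/image/abc123/full/full/0/default.jpg"
```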
Access restrictions. Booksnake can only work with images that are publicly accessible without authentication. Materials behind access controls, paywalls, or institutional login requirements are not currently compatible.
Section 3. Dimensional Metadata
Booksnake requires dimensional metadata to create life-size virtual objects. Without accurate dimensional information, virtual objects will be rendered too large or too small.
To create a life-size virtual object, Booksnake needs to know an item’s physical dimensions — its height and width in the real world. The digitization process connects three pieces of information: Physical Dimensions × Digitization Resolution = Pixel Dimensions. For example, an item 10 inches wide, digitized at 300 pixels per inch, produces an image 3,000 pixels wide. Knowing any two of these values allows you to calculate the third.
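The relationship above reduces to three one-line calculations; the figures are the worked example from the text:

```python
def pixel_width(physical_inches, ppi):
    """Physical Dimensions x Digitization Resolution = Pixel Dimensions."""
    return physical_inches * ppi

def physical_width(pixels, ppi):
    """Recover physical size from pixel dimensions and resolution."""
    return pixels / ppi

def digitization_resolution(pixels, physical_inches):
    """Recover resolution from pixel and physical dimensions."""
    return pixels / physical_inches

# The worked example: 10 inches wide at 300 ppi yields 3,000 pixels,
# and either missing value can be recovered from the other two.
assert pixel_width(10, 300) == 3000
assert physical_width(3000, 300) == 10
assert digitization_resolution(3000, 10) == 300
```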
Booksnake can work with dimensional metadata in several forms, described below. If you’re uncertain which applies to your collection, the most useful thing to check is your digitization documentation and your IIIF manifests.
If your IIIF manifests include physical dimensions as computer-readable values (Option A)
Some institutions record an item’s physical dimensions directly in its IIIF manifest, with separate fields for height and width expressed as numbers rather than as text strings. Booksnake can ingest this data directly to create a life-size virtual object.
Item records should include both height and width where possible, though Booksnake can work with a single measurement. Dimensions may be expressed in inches, centimeters, or other standard units.
This approach produces the most accurate virtual objects — ones that exactly match the physical dimensions in the item record — and requires no additional configuration.
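A minimal sketch of what ingesting such values involves, assuming hypothetical field names (`physical_height_cm`, `physical_width_cm`); actual manifests name and structure these fields differently from institution to institution:

```python
# Field names here are hypothetical placeholders; real manifests vary.
def physical_size_cm(manifest):
    """Read computer-readable physical dimensions from a manifest, if present.

    Returns (height, width); either value may be None, since Booksnake
    can work with a single measurement.
    """
    height = manifest.get("physical_height_cm")
    width = manifest.get("physical_width_cm")
    return (
        float(height) if height is not None else None,
        float(width) if width is not None else None,
    )

manifest = {"physical_height_cm": 60.4, "physical_width_cm": 55}
assert physical_size_cm(manifest) == (60.4, 55.0)
```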
If your IIIF manifests include digitization resolution and pixel dimensions as computer-readable values (Option B)
Some institutions record digitization resolution and pixel dimensions in IIIF manifests rather than physical dimensions. Booksnake can use these values to calculate physical dimensions directly: dividing pixel dimensions by digitization resolution yields physical dimensions in the original unit.
For this method to work reliably, there must be a consistent, documented relationship between the pixel dimensions of reference images (produced during digitization) and the pixel dimensions of images served through IIIF. Consistent rescaling (for example, all images reduced to 50% before serving) is compatible with this method, as long as the scaling factor is known and applied uniformly. Arbitrary rescaling, such as capping images at a maximum pixel dimension regardless of the original image size, will break the calculation and is not compatible.
This method produces highly accurate virtual objects and works particularly well for compound objects such as books and newspapers, where each page image can be sized individually.
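A sketch of the Option B calculation, including a uniform serving-scale factor; the parameter names and figures are illustrative:

```python
def physical_inches_from_pixels(served_pixels, reference_ppi, serving_scale=1.0):
    """Physical size = reference pixel dimensions / digitization resolution.

    serving_scale is the uniform factor applied before serving (e.g. 0.5
    if all images are reduced to 50%); it recovers the reference image's
    pixel dimensions from the served image's. An arbitrary, per-image cap
    on pixel dimensions has no single such factor, which is why it breaks
    this calculation.
    """
    reference_pixels = served_pixels / serving_scale
    return reference_pixels / reference_ppi

# Illustrative figures: a page digitized at 400 ppi and served at 50% scale.
# 2,000 served px -> 4,000 reference px -> 10 inches wide.
assert physical_inches_from_pixels(2000, 400, serving_scale=0.5) == 10.0
```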
If your institution follows a consistent digitization pipeline but does not record resolution in item records (Option C)
Some institutions follow consistent digitization standards (such as the FADGI standards or Metamorfoze guidelines) across collections or item types, but do not record digitization resolution in individual item records or IIIF manifests. In these cases, Booksnake can use a pre-configured reference table documenting the applicable resolution for each item type, then calculate physical dimensions by dividing pixel dimensions by the appropriate resolution.
The same consistency requirement applies here as in Option B: there must be a known, uniform relationship between reference image pixel dimensions and the pixel dimensions served through IIIF.
This method can produce accurate virtual objects, but its reliability depends on the level of consistency in your digitization pipeline. We will work with your team to document the relevant parameters. Contact us to discuss your collection’s specifics.
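One way such a pre-configured reference table might look in practice; the item types and resolution values below are hypothetical placeholders, to be documented jointly with each partner institution:

```python
# Hypothetical configuration: item types and resolutions are placeholders,
# not values drawn from any institution's actual pipeline documentation.
RESOLUTION_BY_ITEM_TYPE = {
    "bound_volume": 400,  # ppi
    "map": 300,
    "photograph": 600,
}

def physical_width_inches(item_type, pixel_width):
    """Look up the pipeline's resolution for the item type, then divide."""
    ppi = RESOLUTION_BY_ITEM_TYPE[item_type]
    return pixel_width / ppi

# A map image 7,200 px wide, under a documented 300 ppi map pipeline,
# corresponds to a 24-inch-wide original.
assert physical_width_inches("map", 7200) == 24.0
```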
If your item records contain textual physical descriptions, or your item images contain digitization targets: Autosizer
Autosizer is a machine learning pipeline under active development by the Booksnake project team. We’re developing Autosizer in collaboration with the Huntington Library, using their collections as a primary training and testing environment.
Autosizer uses two complementary approaches to extract dimensional information. Both Autosizer approaches are designed to work with metadata and image features that are standard across cultural heritage institutions, making the methods developed with the Huntington extensible to other collections.
The first approach uses a natural language processing (NLP) model to parse textual physical descriptions. Many institutions record physical dimensions as part of a human-readable text string in a physical description field — for example, “Print; image 60.4 × 55 cm (23¾ × 21⅝ in.); overall 93.2 × 61 cm.” These descriptions are informative but are not computer-readable in their raw form. The Autosizer NLP model parses these textual physical descriptions and extracts numeric dimensions, handling variation in phrasing, units, and field structure. This model is currently in active testing and is producing reliable results across a range of description formats.
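Autosizer’s NLP model is far more capable than a regular expression, but a deliberately simplified regex sketch illustrates the input and output of the parsing task:

```python
import re

# Matches simple patterns like "60.4 x 55 cm". This is only a sketch of
# the task Autosizer's NLP model handles; real descriptions vary far more
# in phrasing, units, fractions, and field structure.
DIMENSION_RE = re.compile(
    r"(\d+(?:\.\d+)?)\s*[x\u00d7]\s*(\d+(?:\.\d+)?)\s*(cm|in)", re.IGNORECASE
)

def extract_dimensions(description):
    """Return (height, width, unit) tuples found in a physical description."""
    return [(float(h), float(w), unit.lower())
            for h, w, unit in DIMENSION_RE.findall(description)]

desc = "Print; image 60.4 × 55 cm; overall 93.2 × 61 cm."
assert extract_dimensions(desc) == [(60.4, 55.0, "cm"), (93.2, 61.0, "cm")]
```

Note that even this toy pattern fails on fractional notation like “23¾ × 21⅝ in.” in the example above, which is exactly the kind of variation that motivates a trained model rather than hand-written rules.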
The second approach uses a computer vision (CV) model to identify digitization targets (the color calibration cards or rulers that digitization teams routinely place alongside items during imaging), then uses their known dimensions to calculate an item’s physical size directly from the image. This model is currently being retrained on a broad range of target types, and will undergo validation and comparison testing before deployment.
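Once the CV model has located the target and measured it in pixels, the underlying scale arithmetic is straightforward; a sketch with illustrative numbers:

```python
def item_width_inches(item_pixel_width, target_pixel_width, target_known_width_inches):
    """Derive an item's physical size from a digitization target of known size.

    Because the target and the item were imaged together, they share one
    scale: pixels-per-inch measured on the target applies to the item too.
    """
    ppi = target_pixel_width / target_known_width_inches
    return item_pixel_width / ppi

# Illustrative figures: a 6-inch ruler spanning 1,800 px implies 300 ppi,
# so an item spanning 3,600 px in the same image is 12 inches wide.
assert item_width_inches(3600, 1800, 6.0) == 12.0
```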
If your collections fall into either category, please contact us to discuss compatibility and potential collaboration. We are actively seeking partner institutions to test and refine both approaches across a wider range of collection types and metadata formats.
If your institution does not record dimensional metadata
If your item records contain no physical dimensions, digitization resolution, or physical descriptions of any kind, Booksnake cannot currently create accurate life-size virtual objects from your collections. Contact us to discuss your situation — in some cases, there may be alternative approaches worth exploring.
Published 2023-12-05. Updated 2026-03-17.