The Drexel Digital Museum project (DDM) was initiated in 1998 when the director was asked to create a database for the Drexel University Historic Costume Collection (DUHCC), an assemblage of 12,000+ objects of historic fashion apparel, accessories, and textiles (Fig. 1). The Collection was housed in two separate locations and documented in classic card-catalogue style (typed and handwritten 3”x5” paper cards), with no full-time, well-trained curator. We recognized an opportunity to create an online archive, dynamically linked to a database and designed to attract both an audience for the physical collection and funding for a full-time curator.
A survey of members of the Costume Society of America, a prime audience of scholars, students, and designers, revealed a need for high-resolution images with rich detail, multiple views of each object, and multiple ways to search the database.
A future-conscious method for documentation, preservation, and dissemination of a cultural heritage object requires ongoing investigation of standards and procedures. Preservation requires not only carefully curated metadata for optimal sharing across platforms via that most democratic means of dissemination, the World Wide Web, but also virtual representations of the objects that can engage the viewer in museum exhibitions, as concerns over object degradation limit physical display.
Adam Lowe, of imaging company Factum Arte, asserts that “facsimiles, and especially those relying on complex (digital) techniques, are the most fruitful way to explore the original and even to help re-define what originality actually is” (Sattin 2015). Europeana’s white paper, The Problem of the Yellow Milkmaid, presented sound economic reasons for museums to publish the highest-quality images of their holdings on the World Wide Web, along with open metadata, after poor-quality reproductions of Johannes Vermeer’s ‘The Milkmaid’ devalued the reproduction postcards of the artwork sold in the Rijksmuseum (Verwayen, Arnoldus, Kaufman 2011). A quality image online can be the best advertisement for drawing an audience to the physical collection.
Our early experiments with QuickTime VR, debuted in 2000, were among the first such representations in the museum community to provide quality image panoramas of historic fashion. These included selections from the DUHCC and couture on loan from private collectors. Each historic garment was placed on a revolving Kaidan rig and a high-resolution image was captured every 20 degrees of rotation, resulting in 18 views of the garment. The views were then composited into a 3D panorama, with embedded hot spots revealing details, which the viewer could manipulate.
The image standards we used were of high enough quality and persistence for the files to be repurposed as MP4 for the web when Apple withdrew support for QuickTime, leaving the format unable to be opened without downloading the QuickTime plugin. Our current image parameters, 126.5 megapixels with a pixel array of 8831×14326, are well above those of the USA’s National Archives and Records Administration (NARA), 10-16 megapixels with a pixel array of 4800×3700.
After participating in a workshop given by Carl Lagoze and Herbert Van de Sompel on the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) at a Museum Computer Network conference in 2001, we decided to make our metadata OAI-PMH compliant. The OAI protocol provided us with a low-barrier, well-defined interoperability framework for preparing our metadata to be harvested openly across domains. Lagoze describes a two-party model in which data providers and service providers use HTTP encoding and XML schema for protocol conformance.
Extensibility is achieved by providing both item-level and collection-level metadata. An item may be searched by descriptive fields like object title or object creator, or by the collection to which the object belongs. The tags for the OAI-PMH are divided into three sections: protocol support, format-specific metadata, and community-specific record data. The sets, or collection definitions, are defined by the communities of the data providers and are not defined by the OAI protocol (Lagoze, Van de Sompel 2001). Using the ‘technical umbrella’ of OAI to expose our metadata in a uniform fashion, we customized the OAI record fields with elements associated with historic costume.
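As a concrete illustration, an OAI-PMH request is an ordinary HTTP GET whose query parameters carry the verb and its arguments. The sketch below builds such a request; the verb and parameter names come from the protocol, while the base URL and set name are hypothetical stand-ins.

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **args):
    """Build an OAI-PMH request URL (the protocol encodes requests as HTTP GET)."""
    return base_url + "?" + urlencode({"verb": verb, **args})

# Ask a (hypothetical) data provider for all records in a community-defined
# set, expressed in the protocol's required Dublin Core metadata format.
url = oai_request("https://example.org/oai", "ListRecords",
                  metadataPrefix="oai_dc", set="historic_costume")
print(url)
```

A service provider issues requests of this form, then parses the returned XML, whose record structure carries the community-specific fields described above.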
Among the numerous facets of interoperability, including uniform naming, metadata formats, document models, and access protocols, naming convention was the most challenging within the historic costume community. Although costume collections developed their vocabularies from assorted sources such as Picken and the vocabularies of the Getty Research Institute, among others, terms were mostly added to a drop-down list in the category field, based on the holdings of a particular collection and on an as-needed basis. There was much disagreement on the correct terminology for describing a particular object (Picken 1999; Getty 2016).
We chose to develop our vocabulary through a hierarchy and vocabulary-building tool based on the ICOM Vocabulary of Basic Terms for Cataloguing Costume. Its appeal was that it described costume by where it is worn on the body and avoided the problem of the multilingualism of costume terms (Buck 1978). At the granular level of the hierarchy we added historic terms, derived from the sources most used by the community, and contemporary fashion terms, with the rule that a term must have appeared in at least three fashion publications such as Women’s Wear Daily and Vogue. The tool, dynamically linked to the DDM database and open to the costume community, allowed visitors to add new terms and synonyms under the same rule for identifying sources.
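The three-publication rule can be enforced mechanically at term entry. The following is a minimal sketch, with hypothetical class, field, and term names, of how such a vocabulary-building tool might apply it; the actual DDM tool is not reproduced here.

```python
class CostumeVocabulary:
    """Minimal sketch of a hierarchical costume vocabulary (hypothetical names)."""
    MIN_SOURCES = 3  # a term must appear in at least three fashion publications

    def __init__(self):
        # term -> record; 'parent' places the term in the body-location hierarchy
        self.terms = {}

    def add_term(self, term, parent, sources, synonyms=()):
        if len(set(sources)) < self.MIN_SOURCES:
            raise ValueError(f"'{term}' needs {self.MIN_SOURCES} published sources")
        self.terms[term] = {"parent": parent,
                            "sources": sorted(set(sources)),
                            "synonyms": list(synonyms)}

vocab = CostumeVocabulary()
vocab.add_term("maxi skirt", parent="lower torso",
               sources=["Women's Wear Daily", "Vogue", "Harper's Bazaar"])
```

A submission citing fewer than three distinct publications is rejected, which is how community contributions can be kept consistent with the curated hierarchy.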
We use our tool when entering a new fashion object record in our backend database. We are keen to continue research on controlled vocabulary for costume, and are following investigations such as the heritage object terminology project overseen by MoMu, the fashion museum of Antwerp, Belgium, with ten participating museums in the Dutch-speaking area of Western Europe. The project outlines the benefits of linking metadata to shared controlled vocabularies for collections of fashion heritage by tagging the controlled text fields of their databases with an identifier corresponding to a concept.
These concepts are published on the web as the Visuele Thesaurus voor Mode en Kostuums and the Europeana Fashion Thesaurus, a controlled and freely shared vocabulary that is searchable by multiple and multilingual terms united by a uniform identifier. Using a controlled, machine-readable vocabulary is a path to the enhanced search provided by linked open data, where holdings from both individual collections and aggregators are made accessible through heterogeneous sets of metadata (Wildenborg 2016, pp. 1-12).
In 2008 we migrated our original database to a CollectiveAccess (CA) site to take advantage of CA’s annotation, discovery, and reporting tools. CA is a highly flexible and easily customized open-source collections management and presentation software used by museums, non-profits, and private collectors on five continents (Fig. 2).
In 2008 our original mission, raising the profile of the DUHCC and attracting funding to hire a full-time curator, was accomplished. Since then, custodianship of the Collection has been brought to best-practice standards, a staff of three hired, holdings increased to 14,000+ objects, and a dedicated gallery built in the URBN Center of the Westphal College. In 2014 the DUHCC was renamed the Robert and Penny Fox Historic Costume Collection. The Collection now has its own website, separate from the DDM website. This has freed us to concentrate our research on evolving best practices in image quality and data structure for the production, conservation, and dissemination of new media for exhibition of historic fashion.
Since the withdrawal of support for QuickTime left the QuickTime movies linked in the CA database unplayable, our option was to repurpose the highest-resolution files, the original camera RAW files saved as TIFF, as MP4. As we began to repurpose these files, we looked for a new process for creating the panoramas that would achieve higher fidelity to the object through higher-resolution imaging; increase sustainability through a plugin-free environment; and provide navigation for users in pursuit of a fashion experience and knowledge by preparing our metadata for harvesting by trusted aggregators.
ObjectVR and HTML5
Our research uses an array of imaging technology to create ultrahigh-resolution ObjectVRs, and repurposes the ObjectVRs as web-ready HTML5, a developing markup language standard for structuring and presenting multimedia and graphic elements online. The garment is selected and dressed by the curator and centered on a rotating Kaidan rig as before. We have increased the number of views from 18 to 20 per rotation. The single-image-per-view approach of our early QuickTime movies has been replaced with a process that captures multiple image tiles per view using a GigaPan robotic head attached to the camera. The GigaPan head is set, depending on the width of the garment, to capture 3 or 4 columns of 12 rows of images for each view, producing 36 to 48 camera RAW images per view. The camera RAW images (CR2) are 48-bit RGB, 15.7 megapixels: 4896×3264 pixels. This process is repeated for each view of the garment, producing an image data set of 720 to 960 images per object (Fig. 3).
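The capture arithmetic above can be checked directly:

```python
# Tile and image counts implied by the capture parameters above:
# 20 views per rotation, 12 GigaPan rows per view, and 3 or 4 columns
# depending on garment width.
views, rows = 20, 12
totals = {cols: views * cols * rows for cols in (3, 4)}
print(totals)  # images captured per object -> {3: 720, 4: 960}
```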
The RAW camera images are exported at the optimum pixel size in TIFF file format: 230.7 megapixels, 32-bit RGB, 11190×20620. When the TIFF files are loaded into PTGui Pro, the software aligns the images automatically using the EXIF (Exchangeable Image File) data from the camera. The alignment process generates control points, which are matching points on two overlapping images. The automatic stitching process usually results in a seamless stitch; on occasion, however, especially when there is no pattern for the software to recognize between images, such as in our photo-paper background, it is necessary to create control points manually. Once the software has aligned all images and enough control points have been determined, each view of the panorama is created as a single image: 316.5 megapixels, 32-bit RGB, 15345×20620 (Fig. 4).
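The megapixel figures follow from the quoted pixel arrays; the small computation below reproduces them to within rounding of the stated dimensions:

```python
def megapixels(width, height):
    """Megapixel count from a pixel array (width x height)."""
    return width * height / 1e6

export_mp = megapixels(11190, 20620)    # exported TIFF, per the parameters above
stitched_mp = megapixels(15345, 20620)  # stitched single-view panorama
print(f"export: {export_mp:.1f} MP, stitched view: {stitched_mp:.1f} MP")
```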
Garden Gnome Object2VR software is then used to create a 360º interactive object from the stitched images that allows users to examine it in virtual 3D by rotating and zooming in on the image. The software also generates HTML5 files for the 3D object. The ObjectVRs can be displayed at up to three times life size, rotated through 360 degrees, zoomed in to rich detail, and made compatible with the internet via HTML5. In 2018 the Fox Collection received the entire personal archive of fashion design by James Galanos, the renowned American fashion designer. For the exhibition debut of the archive, we created ObjectVRs for three of the gowns included in the exhibition. These ObjectVRs were exhibited on a 70” monitor, close to life size, alongside the physical gowns (Fig. 5).
The low lighting of the exhibition is necessary to conserve the color and luster of the fabrics. In the digital image capture process we use stronger lighting, applied with protection of the object foremost. The three-dimensional objects are often made from translucent and/or sheer textiles and are best illuminated with soft light from many directions, achieved by using large softboxes and shoot-through umbrellas to illuminate the background. Four strobe lights are synched to the shutter to prevent the harsh shadows that would result from single-strobe, focused light. Adjusting the height of the strobes to coincide with the bust of the subject and the height of the camera results in a uniform, well-lit image.
This enables the viewer to see discrete details of embellishment and construction not evident in the low lighting of the exhibition. We avoid dramatic lighting of any specific details, allowing viewers to decide which parts of the object they want to explore. This interactivity creates a participatory user experience, which responds to users’ various learning styles, interests, and knowledge, affecting the emotional and experiential qualities of viewers’ interaction with the fashion artifacts (Fry, Holland 2013, pp. 54-58).
A current project is to prepare the assets of the Drexel Digital Museum (DDM) to share metadata and preview images of objects with PA Digital, the data aggregator for the Digital Public Library of America (DPLA). Through a MODS mapping collaboration with members of the DU Libraries group in 2017-18, we mapped metadata from the customized fields in the DDM record to the Metadata Object Description Schema (MODS) maintained by the Library of Congress. MODS is a robust metadata schema which provides a high degree of granularity and extensibility and can be used to describe a wide variety of physical and digital objects. MODS also supports the use of embedded, persistent uniform resource identifiers (URIs) for entities such as names and topics, which will facilitate exposure of our assets via linked open data.
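A field-to-element crosswalk of this kind can be represented very simply. The sketch below uses real MODS element names, but the DDM field names are illustrative assumptions; the actual customized fields are not reproduced here.

```python
# Illustrative crosswalk: DDM custom fields (hypothetical names) -> MODS elements.
DDM_TO_MODS = {
    "object_title":   "titleInfo/title",
    "object_creator": "name/namePart",
    "date_created":   "originInfo/dateCreated",
    "category":       "subject/topic",
}

def to_mods(record):
    """Re-key a DDM record onto MODS element paths, dropping unmapped fields."""
    return {DDM_TO_MODS[k]: v for k, v in record.items() if k in DDM_TO_MODS}

mapped = to_mods({"object_title": "Evening gown",
                  "object_creator": "James Galanos",
                  "internal_note": "not exported"})
print(mapped)
```

In production the mapped values would be serialized as MODS XML, with URIs attached to name and topic entities for linked open data.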
We currently store our data sets in Microsoft Teams™, coupled with the cloud storage system SharePoint™. In our pilot test of Teams™ we uploaded all files associated with the development of one ObjectVR movie. These 38,999 individual files include all image files created, software and hardware specifications, workflow, data standards descriptions, and a ReadMe file for navigating the files: all the data needed to repeat the creation of an ultrahigh-resolution ObjectVR of a cultural heritage object. Examples of the output at each phase of development illustrate the process. Upload, via HTTP to the SharePoint document folder, took 38.9 hours over a span of 16 days. We are in the process of developing a scripted solution to allow continuous FTP uploading of data sets for long-term storage. We have been working with the Drexel Libraries group to test various management systems so our data can be stored in Drexel’s iDEA repository for digital resources produced and collected by the Drexel community. At this time, however, our compound images are too large to be uploaded to and managed by this resource.
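For a sense of scale, the pilot upload figures above imply the following average throughput:

```python
# Average transfer rate implied by the pilot upload: 38,999 files in 38.9 hours.
files, hours = 38_999, 38.9
per_hour = files / hours
secs_per_file = hours * 3600 / files
print(f"~{per_hour:.0f} files/hour, ~{secs_per_file:.1f} s/file of transfer time")
```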
Under the advisement of IEEE Fellow David Taubman, University of New South Wales, we are investigating the fidelity of JPEG2000 compression techniques on our image files. Dr. Taubman is the co-author, with Michael Marcellin, of JPEG2000: Image Compression Fundamentals, Standards and Practice. When these files are complete, we plan to test the image quality against the parameters of the Federal Agencies Digital Guidelines Initiative (FADGI) Digital Image Conformance Evaluation (DICE) process, monitoring for performance and quality.
We are now working with the Fox Collection to identify fashion which is culturally significant but too fragile for mounting and display in a traditional museum exhibition, and to create ObjectVRs to be exhibited in its stead. One of the gowns planned for the Galanos exhibit was recognized as such; after we made an ObjectVR of it, it was placed in archival storage. The only means of public access are now the ObjectVR, online as an HTML5 file, or a set of archival still images.
Advanced technologies can be employed to free historic fashion design from static presentation and even to create a customized avatar mannequin representing the spirit of the designer and inclusive examples of body type (Debo 2018, p. 79). Our current set of historic fashion objects for the ObjectVR process is two gowns by American fashion designer and Hollywood costumer Gilbert Adrian, both created in 1931: one worn by Joan Crawford in the movie This Modern Age, the other by Greta Garbo in the movie Inspiration. The ObjectVR for the Garbo gown is complete. We are also realizing the gown in CLO 3D virtual fashion design software, with plans to introduce motion to the gown via an animated avatar. Experiments in creating a customized avatar for the project have begun. We plan to present this work in 2021-22.
Thank you to the Drexel University Research Office, the Barra Foundation, the William S. Dietrich Foundation and Merle Raab, in memory of her husband Max Raab, for support for our research, and to all the Westphal students, faculty and administration who have worked on and supported the project.
Buck, A. (1978) Vocabulary of Basic Terms for Cataloguing Costume. London: ICOM International Committee for the Museums and Collections of Costume, Collation: 18. [http://terminology.collectionstrust.org.uk/ICOM-costume/].
Debo, K. (2018) Fashion Curation at MOMU: Digital Challenges. In A.M. Vänskä, H. Clark (eds.), Fashion Curating, p. 79. London: Bloomsbury Academic.
Fry, E.B., J. Holland (2013) Remix: Design, Media, and Shaping Experiences. Exhibitionist (Fall), pp. 54-58. [https://www.name-aam.org/s/12-EXH-f13-Remix_Holland.pdf].
The Getty Research Institute. The Getty Arts & Architecture Thesaurus [Available at: https://www.getty.edu/research/tools/vocabularies/aat/ (First accessed: 15 February 2001)].
Picken, M.B. (1999) A Dictionary of Costume and Fashion: Historic and Modern (Dover Fashion and Costumes). Mineola, New York: Dover.
Puglia, S. (January 1998) U.S. National Archives and Records Administration – Electronic Access Project Scanning and File Format Matrix [https://www.archives.gov/files/preservation/technical/guidelines-matrix.pdf].
Sattin, A. (2015) Meet the master of reproduction. Christie’s Magazine 9 [online: christies.com/features/Master-of-reproduction-Adam-Lowe-and-Factum-Arte-6776-1.aspx].
Verwayen, H. et al. (2011) The Problem of the Yellow Milkmaid: A Business Model Perspective on Open Metadata. Europeana [online: pro.europeana.eu/files/Europeana_Professional/Publications/Whitepaper_2-The_Yellow_Milkmaid].
Wildenborg, Y. (2016) Fashion Terminology Today. Museums and Cultural Landscapes: Proceedings of the ICOM Costume Committee Annual Meeting, Milan, 3-7 July 2016 [http://costume.mini.icom.museum/publications-2/publications/proceedings-of-the-icom-costume-committee-annual-meeting-in-milan-2016/ (Accessed: 5 October 2018)].