The residency period of the American Archive of Public Broadcasting (AAPB) National Digital Stewardship Residency (NDSR) project has now ended, but we’re very proud to launch the final project created by our AAPB NDSR residents: The American Archive of Public Broadcasting Wiki, a technical preservation resource guide for public media organizations.
Selena Chau, Eddy Colloton, Adam Lott, Kate McManus, Lorena Ramírez-López, and Andrew Weaver have highlighted their collaboration and shared the resources, workflows, and documents they use for managing audiovisual assets across a wide range of formats and environments. The resulting Wiki encompasses everything from the first stages of the planning process to exit strategies from a storage or database solution.
AAPB staff and the residents hope that this Wiki will be an evolving resource. Editing capabilities will be locked on the Wiki for one week following launch, to allow time for the creation of a web archive of the resource in its original form that the residents may use in their portfolios; after this period, we will open up account creation to the audiovisual archiving and public broadcasting communities. We welcome your participation and contributions!
I am Chris Pierce, the Cataloger/Metadata Specialist for the American Archive of Public Broadcasting and the National Educational Television (NET) Collection Catalog project at the Library of Congress. The NET Collection Catalog Project is a collaboration between WGBH and the Library of Congress, funded by the Council on Library and Information Resources (CLIR). The project involves creating a national catalog of records that document and richly describe titles distributed by NET, public media’s first national network, whose programs are among public media’s earliest and most at-risk content.
In addition to cataloging moving image material distributed by NET from the mid-1950s to the early 1970s, I am also working on a feasibility report on the implementation of linked data for the NET catalog.
Linked data? Huh?
What is linked data? The Wikipedia definition is “a method of publishing structured data so that it can be interlinked.” To put it simply, linked data is data that can be linked to other data, much as web pages are linked to one another through hyperlinks.
Why would we want to implement linked data? There are several reasons:
AAPB/NET metadata contains valuable and largely undiscovered relationships that, when reused by others on the internet, can enhance the information already online.
It would open AAPB/NET metadata to web applications, making the metadata more discoverable and shareable on the web.
It would contribute to the sustainability of future cataloging at the AAPB by connecting our metadata more deeply to external metadata, which could then be reused to describe AAPB material.
These links are structured through relationships expressed as triples. In the image above, these triples are represented in graph form, but they can also be serialized in machine readable code. In both the serialization and the graph, these triples are logical statements:
This person hasRealName Stephen King
This person hasTwitter @StephenKing
@StephenKing hasContent [pictures of his dog Molly aka Thing of Evil]
A triple is simply a relationship between a subject and an object communicated through a predicate:
The data model that supports the exchange of data structured in this way (as a web of interlinked nodes connected through relationships expressed as triples) is the Resource Description Framework (RDF). RDF can be semantically structured through specifications that define what types of data are being modeled. For instance, RDF Schema (RDFS) is a data modeling vocabulary that can be used to define classes and possible relationships between classes. BIBFRAME is another vocabulary, being developed by the Library of Congress to represent library bibliographic metadata in RDF. Another example is EBUCore, a vocabulary designed by the European Broadcasting Union to support linked data in various stages of the life cycle of broadcasting material, including production, business, and archives. Vocabularies such as these are central to having every subject, predicate, and object defined and expressed as Uniform Resource Identifiers (URIs) rather than literal string values (strings that are not actionable through links), and they expand upon the types of things that can be described as linked data (at various levels of granularity).
Use HTTP URIs so that people can look up those names.
When someone looks up a URI, provide useful information, using the standards (RDF).
Include links to other URIs, so that they can discover more things.
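The three guidelines above are Tim Berners-Lee’s linked data principles, and you can watch them at work from the command line. Below is a minimal sketch; the DBpedia URI for Stephen King is used purely for illustration and is not part of the NET project, and the exact response depends on the server’s content negotiation support. The request asks a linked data server for an RDF serialization of a resource instead of the default HTML page:

```bash
# Dereference a linked data URI and ask, via HTTP content negotiation,
# for a Turtle (RDF) representation rather than HTML. The -L flag follows
# the redirect from the resource URI to the document that describes it.
curl -sL -H "Accept: text/turtle" http://dbpedia.org/resource/Stephen_King

# The triples that come back contain URIs for related resources,
# which is what lets applications "discover more things."
```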
The NET project
The feasibility report on which my colleagues at the Library of Congress and I are working will focus on records generated through the NET catalog project (where I spend the majority of my day cataloging). We catalog these records in our content management system, MAVIS. MAVIS outputs the data as MAVIS XML, a hierarchically structured format for representing metadata. We are looking at ways to transform MAVIS XML to PBCore (the XML schema in use by the AAPB) and then to RDF linked data. We are examining existing technologies, vocabularies, and workflows, and identifying other problems we need to solve. The results of this research will benefit not only the AAPB, but also other cultural heritage institutions and public broadcasting organizations working to implement linked data. I am currently in the “literature review” stage of the linked data research. Look forward to future posts about our process!
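As a rough illustration of the kind of transformation chain under consideration, an XSLT-based pipeline run with the common xsltproc command-line tool might look like the sketch below. The stylesheet and file names are hypothetical placeholders, not project deliverables; the feasibility study has not settled on specific tooling.

```bash
# Hypothetical two-step crosswalk: MAVIS XML export -> PBCore XML -> RDF/XML.
# Stylesheets and file names are placeholders for whatever mappings the
# feasibility study ultimately recommends.
xsltproc mavis-to-pbcore.xsl mavis-export.xml > record-pbcore.xml
xsltproc pbcore-to-rdf.xsl record-pbcore.xml > record.rdf
```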
This post was written by Chris Pierce, AAPB and NET Cataloger/Metadata Specialist.
In our last blog post on managing the PBS NewsHour Digitization Project, I briefly discussed WGBH’s digital preservation and ingest workflows. Though many of our procedures follow standard practices common to archival work, I thought it would be worthwhile to cover them in more depth for those who might be interested. We at WGBH are responsible for describing, providing access to, and digitally preserving the proxy files for all of our projects; the Library of Congress preserves the masters. In this post I cover how we preserve and prepare to provide access to proxy files.
Before a file is digitized, we ingest the item-level tape inventory generated during the project planning stages into our Archival Management System (AMS, which is available on GitHub). The inventory is a CSV that we normalize to our standards, upload, and then map to PBCore in MINT, or “Metadata Interoperability Services,” an open-source web-based plugin designed for metadata mapping and aggregation. The AMS ingests the data and creates new PBCore records, which are stored as individual elements in tables in the AMS. The AMS generates a unique ID (GUID) for each asset. We then export the metadata, provide it to the digitization vendor, and use the GUIDs to track records throughout the project workflow.
For the NewsHour project, George Blood L.P. receives the inventory metadata and the physical tapes to digitize to our specifications. For every GUID, George Blood creates an MP4 proxy for access, a JPEG2000 MXF preservation master, sidecar MD5 checksums for both video files, and a QCTools report XML for the master. George Blood names each file after the corresponding GUID and organizes the files into an individual folder for each GUID. During the digitization process, they record digitization event metadata in PREMIS spreadsheets. The AMS automatically harvests those sheets at regular intervals and inserts the metadata into the corresponding catalog records. With each delivery batch George Blood also provides MediaInfo XML saved in BagIt containers for every GUID, along with a text inventory of the delivery’s assets and corresponding MD5 checksums. The MediaInfo bags are uploaded via FTP to the AMS, which harvests technical metadata from them and creates PBCore instantiation metadata records for the proxies and masters. WGBH receives the digitized files on LTO 6 tapes, and the Library of Congress receives theirs on rotating large-capacity external hard drives.
For those who are not familiar with the tools I just mentioned, I will briefly describe them. A checksum is a computer-generated cryptographic hash. There are different types of hashes, but we use MD5, as do many other archives. The computer analyzes a file with the MD5 algorithm and produces a 32-character code. If a file does not change, the MD5 value generated will always be the same. We use MD5s to ensure that files are not corrupted during copying and that they stay the same (“fixed”) over time. QCTools is an open-source program developed by the Bay Area Video Coalition and its collaborators. The program analyzes the content of a digitized asset, generates reports, and facilitates the inspection of videos. BagIt is a file packaging format developed by the Library of Congress and partners that facilitates the secure transfer of data. MediaInfo is a tool that reports technical metadata about media files; it’s used by many in the AV and archives communities. PREMIS is a metadata standard used to record data about an object’s digital preservation.
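As a quick illustration (the file name below is a placeholder), generating a checksum on a Mac takes a single command, and re-running it later should return the identical value if the file is unchanged:

```bash
# Generate an MD5 checksum for a file. macOS ships the `md5` command;
# on Linux the equivalent is `md5sum`. The file name is a placeholder.
md5 example-proxy.mp4
# Typical output: MD5 (example-proxy.mp4) = <32-character hexadecimal digest>
```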
Now a digression about my inventories – sorry in advance. ¯\_(ツ)_/¯
I keep two active inventories of all digitized files received. One is an Excel spreadsheet “checksum inventory” in which I track whether a GUID that was supposed to be delivered was not received, or whether a GUID was delivered more than once. I also use it to confirm that the checksums George Blood gave us match the checksums we generate from the delivered files, and it serves as a backup for checksum storage and organization during the project. The inventory has a master sheet with info for every GUID, and each tape has an individual sheet with an inventory and checksums of its contents. I set up simple formulas that flag any GUIDs or checksums that have issues. I could use scripts to automate the checksum validation process, but I like having the data visually organized for the NewsHour project. Given the relatively small volume of fixity checking I’m doing, this manual verification works fine for this project.
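For anyone who does want to script that validation, here is a minimal sketch of the idea. It assumes the vendor checksums are saved in the conventional “digest  filename” manifest format and that GNU md5sum is available (for example via Homebrew on a Mac); neither assumption reflects our actual NewsHour setup, and the paths are placeholders.

```bash
# Verify delivered files against a checksum manifest and print only failures.
# Paths and the manifest name are placeholders.
cd /path/to/delivered/files
md5sum -c vendor-manifest.md5 | grep -v ': OK$'
```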
The other inventory is the Approval Tracker spreadsheet in our Google Sheets NewsHour Workflow workbook. The Approval Tracker is used to manage reporting about each GUID’s ingest and digital preservation workflow status. I record in it when I have finished the digital preservation workflow on a batch, and I mark when the files have been approved by all project partners. Partners have two months from the date of delivery to report approvals to George Blood. Once the files are approved they’re automatically placed on the Intern Review sheet for the arrangement and description phase of our workflow.
Okay, forgive me for that, now back to WGBH’s ingest and digital preservation workflow for the NewsHour project!
The first thing I do when we receive a shipment from George Blood is the essential routine I learned the hard way while stocking a retail store – always make sure everything that you paid for is actually there! I do this for the physical LTO tapes, the files on the tapes, the PREMIS spreadsheet, the bags, and the delivery’s inventory. In Terminal I use a bash script that checks a list of GUIDs against the files present on our server to ensure that all bags have been correctly uploaded to the AMS. If we’ve received everything expected, I then organize the data from the inventory, copying the submission checksums into each tape’s spreadsheet in my Excel “checksum inventory”. Then I start working with the tapes.
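A minimal sketch of the kind of GUID presence check described above could compare a sorted list of expected GUIDs against a sorted list of what actually arrived. The paths and file names below are placeholders, and this is a simplified stand-in for the actual script, not the script itself.

```bash
#!/usr/bin/env bash
# Compare the GUIDs we expect from the delivery inventory against the bag
# folders actually present on the server, and report anything missing.
# Paths and file names are placeholders.
expected="expected-guids.txt"        # one GUID per line, from the inventory
server_dir="/path/to/uploaded/bags"

ls "$server_dir" | sort > received-guids.txt
comm -23 <(sort "$expected") received-guids.txt > missing-guids.txt

if [ -s missing-guids.txt ]; then
  echo "Missing GUIDs:"
  cat missing-guids.txt
else
  echo "All expected GUIDs are present."
fi
```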
One important piece of background information is that the AAPB staff at WGBH work in a Mac environment, so what I’m writing about works for Mac, but it could easily be adapted to other systems. The first step I take with the tapes is to check them for viruses. We use Sophos to do that in Terminal, with the Sweep command. If no viruses are found, I then use one of our three LTO workstations to copy the MP4 proxies, proxy checksums, and QCTools XML reports from the LTO to a hard drive. I use the Terminal to do the copying, which I leave running while I go on to other work. When the tape is done copying, I use Terminal to confirm that the number of files copied matches the number I expected to copy. After that, I use it to run an MD5 report (with the find, -exec, and md5 commands) on the copied files on the hard drive. I put those checksums into my Excel sheet and confirm that they match the sums provided by George Blood, that there are no duplicates, and that we received everything we expected. If all is well, I put the checksum report onto our department server and move on to examining the delivered files’ specifications.
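Strung together, those Terminal steps look roughly like the sketch below. Mount points, paths, and output names are placeholders, and the Sophos invocation is illustrative only; check your version’s command-line syntax.

```bash
# 1. Virus-scan the mounted LTO with Sophos's command-line scanner
#    (invocation is illustrative; options vary by Sophos version).
sweep /Volumes/LTO_TAPE

# 2. Copy proxies, sidecar checksums, and QCTools reports to a working drive.
cp -R /Volumes/LTO_TAPE/ /Volumes/WORK_DRIVE/batch_01/

# 3. Confirm the number of files copied matches the number expected.
find /Volumes/WORK_DRIVE/batch_01 -type f | wc -l

# 4. Generate an MD5 report of the copied files for comparison against
#    the vendor's checksums.
find /Volumes/WORK_DRIVE/batch_01 -type f -exec md5 {} \; > batch_01_md5_report.txt
```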
I use MediaInfo and MDQC to confirm that the files we receive conform to our expectations. Again, this is something I could streamline with scripts if the workflow needed it, but MDQC gets the job done for the NewsHour project. MDQC is a free program from AVPreserve that checks a group of files against a reference file and passes or fails them according to rules you specify. I set the test to check that the files in a delivered batch are encoded to our specifications. If any files fail the test, I use MediaInfo in Terminal to examine why they failed. I record any failures at this stage, or earlier in the checksum stage, in an issue tracker spreadsheet the project partners share, and report the problems to the vendor so that they can deliver corrected files.
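For example, pulling a few key parameters from a failed file with MediaInfo in Terminal might look like the sketch below; the file name and the particular fields queried are illustrations, not our full specification.

```bash
# Report selected video parameters for a proxy that failed the MDQC test.
mediainfo --Inform="Video;Codec: %Format%, Size: %Width%x%Height%, Frame rate: %FrameRate% fps" example-proxy.mp4

# Or dump the complete technical metadata for closer inspection.
mediainfo -f example-proxy.mp4
```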
Next I copy the files on the hard drive onto other working hard drives for the interns to use during the review stage. I then skim a small sample of the files to confirm that their content meets our expectations, comparing the digitizations to the transfer notes provided by George Blood in the PREMIS metadata. I review a few of the QCTools reports, looking at the video’s levels. I don’t spend much time doing that, though, because the Library of Congress reviews the levels and characteristics of every master file. If everything looks good I move on, because all the proxies will be reviewed at an item level by our interns during the next phase of the project’s workflow anyway.
The last steps are to mark the delivery batch’s digital preservation as complete and the files as approved in the Approval Tracker, create a WGBH catalog record for the LTO, run a final MD5 manifest of the LTO and hard drive, upload some preservation metadata (archival LTO name, file checksums, and the project’s internal identifying code) to the AMS, and place the LTO and drive in our vault. The interns then review and describe the records, and after that the GUIDs move into our access workflow. Look forward to future blog posts about those phases!
In January 2016, the Council on Library and Information Resources awarded WGBH, the Library of Congress, WETA, and NewsHour Productions, LLC a grant to digitize, preserve, and make publicly accessible on the AAPB website 32 years of NewsHour predecessor programs, from October 1975 to December 2007, that currently exist on obsolete analog formats. Described by co-creator Robert MacNeil as “a place where the news is allowed to breathe, where we can calmly, intelligently look at what has happened, what it means and why it is important,” the NewsHour has consistently provided a forum for newsmakers and experts in many fields to present their views at length in a format intended to achieve clarity and balance, rather than brevity and ratings. A Gallup Poll found the NewsHour America’s “most believed” program. We are honored to preserve this monumental series and include it in AAPB.
Today, we’re pleased to update you on our project progress, specifically regarding the new digitization project workflows that we have developed and implemented to achieve the goals of the project.
The physical work of digitizing the NewsHour tapes and ingesting the new files across the project collaborators has been moving forward since last fall and is now progressing steadily. Like many projects, ours started out as a great idea with many enthusiastic partners – and that’s good, because we needed some enthusiasm to help us sort out a practical workflow for simultaneously tracking, ingesting, quality checking, digitally preserving, describing, and making available at least 7,512 unique programs!
In practice the workflow has become quite different from what the AAPB experienced with our initial project to digitize 40,000 hours of programming from more than 100 stations. With NewsHour, we started by examining the capabilities of each collaborator and what they already intended to do regarding ingestion and quality control on their files. That survey identified efficiencies: The Library of Congress (the Library) took the lead on ingesting preservation quality files and conducting item level quality control of the files. WGBH focused on ingestion of the proxies and communication with George Blood, the digitization vendor. The Library uses the Baton quality control software to individually pass or fail every file received. At WGBH, we use MDQC from AVPreserve to check that the proxy files we receive are encoded in accordance with our desired specifications. Both institutions use scripts to validate the MD5 file checksums the vendor provides us. If any errors are encountered, we share them in a Google Sheet and WGBH notifies the vendor. The vendor then rectifies the errors and submits a replacement file. Once approved, it is time for WGBH to make the files accessible on the AAPB website.
I imagined that making the files accessible would be a smooth routine – I would put the approved files online and everything would be great. What a nice thought that was! In truth, any one work (Global Unique Identifier or “GUID” – our unique work-level identifier) could have many factors that influence what actions we need to take to prepare it to go online. When I started reviewing the files we were receiving, looking at transcripts, and trying to keep track of the data and where various GUIDs were in the workflow, I realized that the “some spreadsheets and my mind” system I intended to employ would result in too many GUIDs falling through the cracks, and would likely necessitate far too much duplicate work. I decided to identify the possible statuses of GUIDs in the NewsHour series and every action that would need to be taken to resolve each status. After I stared at a wall for probably too long, my coworkers found me with bloodshot eyes (JK?) and this map:
Some of the statuses I identified are:
Tapes we do not want captured
Tapes that are not able to be captured
GUIDs where the digitization is not yet approved
GUIDs that don’t have transcripts
GUIDs that have transcripts, but they don’t match the content
GUIDs that are not a broadcast episode of the NewsHour
GUIDs that are incomplete recordings
GUIDs that need redacting
GUIDs that passed QC but should not have
Every status has multiple actions that need to be taken to resolve that issue and move the GUID towards being accessible. The statuses are not mutually exclusive, though some are contingent on or preclude others. It was immediately clear to me that this would be too much to track manually and that I needed a centralized, automated solution. The system would have to allow simultaneous users and would need to be low cost and low maintenance. After discussions with my colleagues, we decided that the best solution would be a Google Spreadsheet that everyone at the AAPB could share.
Here is a link to a copy of the NewsHour Workflow workbook we built. The workbook functions through a “Master List” with a row of metadata for every GUID, an “Intern Review” phase worksheet that automatically assigns statuses to GUIDs based on answers to questions, workflow “Tracker” sheets with resolving actions for each status, and a “Master GUID Status Sheet” that automatically displays the status of every GUID and where each one is in the overall workflow. Some actions in trackers automatically place the GUID into another tracker – for instance, if a reviewer working in the “No Transcript Tracker” on an episode for which we don’t have a transcript identifies content that needs to be redacted, that GUID is automatically placed on the “Redaction Tracker”.
A broad description of our current project workflow is: All of the project’s GUIDs are on the “Master GUID List,” and their presence on that list automatically puts them on the “Master GUID Status Sheet”. When we receive a GUID’s digitized file, staff put the GUID on the “Approval Tracker”. When a GUID passes both WGBH’s and the Library’s QC workflows it is marked approved on the “Approval Tracker” and automatically placed on the “Intern Review Sheet.” Interns review each GUID and answer questions about the content and transcript, and the answers to those questions automatically place the GUID into different status trackers. We then use the trackers to track actions that resolve the GUIDs’ statuses. When a GUID’s issues in all the status trackers are resolved, it is marked as “READY!” to go online and placed in the “AAPB Online Tracker.” When we’ve updated the GUID’s metadata, put the file online, and recorded those actions in the “AAPB Online Tracker,” the GUID is automatically marked complete. Additionally, any statuses that indicate a GUID cannot go online (for instance, a tape was in fatal condition and unable to be captured) are marked as such in the “Master GUID Status Sheet.” This function helps us differentiate between GUIDs that will not be able to go online and GUIDs that are not yet online but should be when the project is complete.
Here is a picture of a portion of the “Master GUID Status Sheet.”
The workbook functions through cross-sheet references and simple logic. It is built mostly with “IF,” “COUNTIF,” and “VLOOKUP” statements. Its functionality depends on users inputting the correct values in action cells and confirming that they’ve completed their work, but generally those values are locked in with data validation rules and sheet permissions. The workflow review I had conducted proved valuable because it provided the logic needed to construct the formulas and tracking sheets.
Building the workflow manager in Google Sheets took a few drafts. I tested the workflow with our first few NewsHour pilot digitizations, unleashed it on a few kind colleagues, and then improved it with their helpful feedback. I hope that the workbook will save us time figuring out what needs to happen to each GUID and will help prevent any GUIDs from falling through the cracks or incorrectly being put online. Truthfully, the workbook struggles under its own weight sometimes (at one point in my design I reached the 2,000,000-cell limit and had to delete all the extra cells spreadsheet programs always automatically make). Anyone conducting a project any larger or more complicated than the NewsHour would likely need to upgrade to true workflow-management software or a program designed to work from the command line. I hope, if you’re interested, that you take some time to try out the copy of the NewsHour Workflow workbook! If you’d like more information, a link to our workflow documentation that further explains the workbook can be provided.
In 2015, the Institute of Museum and Library Services awarded a generous grant to WGBH on behalf of the American Archive of Public Broadcasting (AAPB) to develop the AAPB National Digital Stewardship Residency (NDSR). Through this project, we have placed seven graduates of master’s degree programs in digital stewardship residencies at public media organizations around the country.
AAPB NDSR has already yielded dozens of great resources for the public media and audiovisual preservation community – and the residents aren’t even halfway done yet! As we near the program’s midpoint, we wanted to catch you up on the program so far.
In August 2016, the residents dispersed to their host stations, and began recording their experiences in a series of thoughtful blog posts, covering topics from home movies to DAM systems to writing in Python.
August also kicked off our first series of guest webinars, focusing on a range of topics of interest to audiovisual and digital preservation professionals. Most webinars were recorded, and all have slides available.
The residents also hosted two great panel presentations, first in September at the International Association of Sound and Audiovisual Archives Conference, and in November at the Association of Moving Image Archivists Conference. The AMIA session in particular generated a lot of Twitter chatter; you can see a roundup here.
To keep up with AAPB NDSR blog posts, webinar recordings, and project updates as they happen, follow the AAPB NDSR site at ndsr.americanarchive.org.
We have two free webinars coming up in January from our AAPB NDSR residents!
Challenges of Removable Media in Digital Preservation (Eddy Colloton) Thursday, January 12th, 3:00 PM ET
Removable storage media could be considered the most ubiquitous of digital formats. From floppy disks to USB flash drives, these portable, inexpensive and practical devices have been relied upon by all manner of content producers. Unfortunately, removable media is rarely designed with long-term storage in mind. Optical media is easy to scratch, flash drives can “leak” electrons, and floppy disks degrade over time. Each of these formats is unique and carries its own risks. This webinar, open to the public, will focus on floppy disks, optical media, and flash drives from a preservation perspective. The discussion will include a brief description of the way information is written and stored on such formats, before detailing solutions and technology for retrieving data from these unreliable sources.
Demystifying FFmpeg/FFplay (Andrew Weaver) Thursday, January 26th, 3:00 PM ET
The FFmpeg/FFplay combination is a surprisingly multifaceted tool that can be used in myriad ways within A/V workflows. This webinar will present an introduction to basic FFmpeg syntax and applications (such as basic file transcoding) before moving into examples of alternate uses. These include perceptual hashing, OCR, visual/numerical signal analysis and filter pads.
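As a small preview of the syntax the webinar will introduce, here is a basic transcode alongside one of the signal-analysis uses mentioned above. The file names and encoding settings are generic illustrations, not AAPB specifications or the webinar’s actual examples.

```bash
# Basic transcode: re-encode a source file to H.264 video and AAC audio in MP4.
ffmpeg -i input.mov -c:v libx264 -pix_fmt yuv420p -c:a aac output.mp4

# Signal analysis: run the signalstats filter and print per-frame statistics
# to the console without writing an output file.
ffmpeg -i input.mov -vf signalstats,metadata=mode=print -an -f null -
```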
WGBH Awarded National Endowment for the Humanities Grant to Support Public Media Content Management Tools and Training
$345,000 will support training materials for PBCore metadata management
Boston, Mass. (December 14, 2016) – WGBH Educational Foundation is pleased to announce that the National Endowment for the Humanities (NEH) has awarded WGBH a $345,000 Preservation and Access Research and Development grant to pursue the PBCore Development and Training Project. Short for “Public Broadcasting Metadata Dictionary,” PBCore is a metadata schema – a standard for organizing information – for the management of public media collections in the United States.
WGBH will use the grant funds to develop tools, methodologies and training workshops to make the standard more accessible to archivists and public media organizations over the course of this 27-month project. Deliverables for the project will include a PBCore cataloging tool, updates to the website, webinars and other training materials, sample records and more.
WGBH’s Media Library and Archives (MLA) has been responsible for the ongoing development of PBCore since 2013, when the Corporation for Public Broadcasting (CPB) selected WGBH and the Library of Congress as the permanent stewards of the American Archive of Public Broadcasting (AAPB). The AAPB coordinates a national effort to preserve at-risk public media before its content is lost to posterity and manages digital access to the unique programming that public stations have aired over the past 60 years. Using PBCore to describe public media content enables anyone managing media content to easily organize and share what is being created today. WGBH is honored that the NEH, which awards grants to top-rated proposals for the preservation of America’s rich history and cultural heritage, has chosen to support this work.
The $345,000 grant award will fund a number of initiatives designed to enhance PBCore’s accessibility among archivists, public media organizations, and archival educators. Deliverables for the project will include:
a new widely available open-source PBCore cataloging tool
improvements and updates to existing PBCore tools
metadata crosswalks and sample integrations with a number of commonly-used metadata standards
updated PBCore-based Excel templates, sample records, and use cases that expand upon existing guidelines and put them in plain language for non-archivists
updates to the PBCore website that incorporate the new tools and documentation in an accessible and user-friendly manner
a set of free webinars explaining the use of the new tools
a printable PDF manual collecting all PBCore documentation and cataloging guidelines
PBCore user training workshops held at major conferences
two fully funded PBCore train-the-trainer workshops, which will support public media professionals and archival educators in learning to train others in PBCore
WGBH looks forward to working with the PBCore user communities to lower barriers around the description and preservation of public media materials.
About WGBH WGBH Boston is America’s preeminent public broadcaster and the largest producer of PBS content for TV and the Web, including Masterpiece, Antiques Roadshow, Frontline, Nova, American Experience, Arthur, Curious George, and more than a dozen other prime-time, lifestyle, and children’s series. WGBH also is a leader in educational multimedia, including PBS LearningMedia, and a pioneer in technologies and services that make media accessible to the 36 million Americans who are deaf, hard of hearing, blind, or visually impaired. WGBH has been recognized with hundreds of honors: Emmys, Peabodys, duPont-Columbia Awards…even two Oscars. Find more information at www.wgbh.org.
About the American Archive of Public Broadcasting The American Archive of Public Broadcasting (AAPB) is a collaboration between the Library of Congress and the WGBH Educational Foundation to coordinate a national effort to preserve at-risk public media before its content is lost to posterity and provide a central web portal for access to the unique programming that public stations have aired over the past 60 years. To date, over 40,000 hours of television and radio programming contributed by more than 100 public media organizations and archives across the United States have been digitized for long-term preservation and access. The entire collection is available on location at WGBH and the Library of Congress, and more than 16,000 programs are available online at americanarchive.org.
About the National Endowment for the Humanities Created in 1965 as an independent federal agency, the National Endowment for the Humanities supports research and learning in history, literature, philosophy, and other areas of the humanities by funding selected, peer-reviewed proposals from around the nation. Additional information about the National Endowment for the Humanities and its grant programs is available at: www.neh.gov.
The Library of Congress is pleased to announce the release of the 2016-2017 Recommended Formats Statement (http://www.loc.gov/preservation/resources/rfs/). The proliferation of ways in which works can be created and distributed is a challenge and an opportunity for the Library (and for all institutions and organizations which seek to build collections of creative works) and the Recommended Formats Statement is one way in which the Library seeks to meet the challenge and take full advantage of the opportunity. By providing guidance in the form of technical characteristics and metadata which best support the preservation and long-term access of digital works (and analog works as well), the Library hopes to encourage creators, vendors, archivists and librarians to use the recommended formats in order to further the creation, acquisition and preservation of creative works which will be available for the use of future generations at the Library of Congress and other cultural memory organizations.
The engagement with the Statement that the Library has seen from others has been extremely heartening. In response to interest in our work from representatives of the architectural community, who see their design work imperiled by insufficient attention to digital preservation, we have updated the Statement to align more closely with developments in that field. Most important of all, websites are now included as a category of their own in the Statement. Websites are probably the largest field of digital expression available to creators today, yet most creators tend to take a passive role in ensuring the preservation and long-term access of their websites. By including websites in the Recommended Formats Statement, we hope to encourage website creators to engage more fully in digital preservation by making their websites more preservation-friendly, as we aim to do with all the other forms of digital works included in the Statement.
The Library remains committed to acquiring and preserving digital works and to providing whatever support it can to other similarly committed stakeholders. We shall continue to build our collections with their preservation and long-term access firmly in mind, and we shall continue to engage with others in the community in efforts such as the Recommended Formats Statement. We encourage any and all feedback and comments (http://www.loc.gov/preservation/resources/rfs/contacts.html) on the Statement that might make it more useful, both for our needs and for the needs of anyone who might find it worthwhile in their own work. And we shall continue the annual review process to ensure that the Statement meets the needs of all stakeholders in the preservation and long-term access of creative works.
The following is a guest post by Rebecca Fraimow, National Digital Stewardship Resident at WGBH and the AAPB.
As the National Digital Stewardship Resident with WGBH and the AAPB, I’ve backed up a lot of drives, designed a lot of workflow diagrams, and written up a lot of documentation, but for my final deliverable for the residency, I got to do something with a slightly broader focus: create a webinar that focused on digital preservation concepts through the lens of the unique needs of a public broadcasting organization.
Although I’ve spent most of the past year in a public media context, WGBH is unusual among public media organizations: we have a strong archival department and a dedicated budget for preservation. That gives us a lot of opportunities to invest in tools and techniques that most public media organizations aren’t going to have. As a result, creating a webinar about digital preservation best practices from a public broadcasting perspective is not as simple as saying ‘here’s what we do and why we do it’ – while it would be great if all stations had the same level of resources, that level of buy-in is something most archivally minded station employees have to fight hard to make a case for.
Therefore, instead of designing the webinar based around our workflows at WGBH, I sent out an open call for topics to see what the audience of (primarily AAPB) stations really wanted to hear about. I got a wide range of responses:
– where to start when creating a digital library
– best practices for migrating videotape to digital files
– how to manage the volume with a small staff
– tools for embedding metadata into audio and video files
– systems for small organizations with little IT support
– integrity checking, video file standards, naming conventions
– getting producers onboard from the get-go
– how to go back into the archives where proper documentation doesn’t exist
– how to properly use the PBCore field called instantiationStandard
Obviously, I don’t have the answer to all these questions (to be honest, instantiationStandard is kind of a confusing field) and, of course, for many of them, there is no right answer — as I can tell you from the experiences of my entire NDSR cohort, even organizations with huge dedicated preservation departments are still trying to figure out the solutions that make the most sense for them. Next year, the AAPB will be sending a new crop of NDSR residents into public media stations to help grapple with some of these issues, but before finding answers, the first step is figuring out the right questions to ask. The webinar is designed to provide a guide to some of those questions, and an overview of the issues to consider when making a case for digital preservation.
You can view the full webinar below (click on the title to open in a larger screen):