PBS NewsHour Digitization Project Update: Ingest and Digital Preservation Workflows

In our last blog post on managing the PBS NewsHour Digitization Project, I briefly discussed WGBH’s digital preservation and ingest workflows. Though many of our procedures follow standard archival practices, I thought it would be worthwhile to cover them in more depth for those who might be interested. We at WGBH are responsible for describing, providing access to, and digitally preserving the proxy files for all of our projects, while the Library of Congress preserves the masters. In this post I cover how we preserve the proxy files and prepare to provide access to them.

Before a file is digitized, we ingest the item-level tape inventory generated during the project planning stages into our Archival Management System (AMS, which is available on GitHub). The inventory is a CSV that we normalize to our standards, upload, and then map to PBCore in MINT (“Metadata Interoperability Services”), an open-source, web-based plugin designed for metadata mapping and aggregation. The AMS ingests the data and creates new PBCore records, which are stored as individual elements in tables in the AMS, and it generates a unique ID (GUID) for each asset. We then export the metadata, provide it to the digitization vendor, and use the GUIDs to track records throughout the project workflow.

Mapping a CSV to PBCore in MINT

For the NewsHour project, George Blood L.P. receives the inventory metadata and the physical tapes to digitize to our specifications. For every GUID, George Blood creates an MP4 proxy for access, a JPEG2000 MXF preservation master, sidecar MD5 checksums for both video files, and a QCTools report XML for the master. George Blood names each file after the corresponding GUID and organizes the files into an individual folder for each GUID. During the digitization process, they record digitization event metadata in PREMIS spreadsheets, which the AMS regularly and automatically harvests, inserting the metadata into the corresponding catalog records. With each delivery batch George Blood also provides MediaInfo XML saved in BagIt containers for every GUID, and a text inventory of the delivery’s assets and corresponding MD5 checksums. The MediaInfo bags are uploaded via FTP to the AMS, which harvests technical metadata from them and creates PBCore instantiation metadata records for the proxies and masters. WGBH receives the digitized files on LTO 6 tapes, and the Library of Congress receives theirs on rotating large-capacity external hard drives.
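To make that per-GUID package concrete, here is a minimal sketch of a completeness check you could run against a delivery; the drive path and file-naming pattern are assumptions for illustration, not the vendor’s actual layout.

```bash
#!/bin/bash
# Hypothetical completeness check for a delivery in which each GUID has its own
# folder containing an MP4 proxy, a JPEG2000 MXF master, MD5 sidecars for both,
# and a QCTools report for the master. Path and file names are illustrative only.
DELIVERY="/Volumes/DELIVERY_DRIVE"

for dir in "$DELIVERY"/*/; do
  guid=$(basename "$dir")
  for expected in "$guid.mp4" "$guid.mp4.md5" "$guid.mxf" "$guid.mxf.md5" "$guid.mxf.qctools.xml"; do
    # Report anything the vendor's folder for this GUID is missing.
    [ -e "$dir$expected" ] || echo "MISSING: $guid/$expected"
  done
done
```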

For those who are not familiar with the tools I just mentioned, I will briefly describe them. A checksum is a computer-generated cryptographic hash. There are different types of hashes, but we use MD5, as do many other archives. The computer analyzes a file with the MD5 algorithm and delivers a 32-character code. If a file does not change, the MD5 value generated will always be the same. We use MD5s to ensure that files are not corrupted during copying and that they stay the same (“fixed”) over time. QCTools is an open-source program developed by the Bay Area Video Coalition and its collaborators. The program analyzes the content of a digitized asset, generates reports, and facilitates the inspection of videos. BagIt is a file packaging format developed by the Library of Congress and partners that facilitates the secure transfer of data. MediaInfo is a tool that reports technical metadata about media files. It’s used by many in the AV and archives communities. PREMIS is a metadata standard used to record data about an object’s digital preservation.
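As a concrete example, generating and re-checking an MD5 on a Mac looks roughly like this (the GUID-style filename is a placeholder):

```bash
# Generate an MD5 for a file (macOS ships a built-in `md5` command; the -r flag
# prints "hash  filename", and the GUID-style name below is a placeholder).
md5 -r cpb-aacip-507-example.mp4

# Re-running the same command later should print the identical 32-character hash.
# If it differs, the file has changed or was corrupted in transit.
md5 -r cpb-aacip-507-example.mp4
```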

Now a digression about my inventories – sorry in advance. ¯\_(ツ)_/¯

I keep two active inventories of all digitized files received. One is an Excel spreadsheet “checksum inventory” in which I track whether a GUID that was supposed to be delivered was not received, or whether a GUID was delivered more than once. I also use it to confirm that the checksums George Blood gave us match the checksums we generate from the delivered files, and it serves as a backup for checksum storage and organization during the project. The inventory has a master sheet with info for every GUID, and then each tape has an individual sheet with an inventory and checksums of its contents. I set up simple formulas that report any GUIDs or checksums that have issues. I could use scripts to automate the checksum validation process, but I like having the data visually organized for the NewsHour project. Given the relatively small volume of fixity checking I’m doing, this manual verification works fine for this project.

Excel “checksum inventory” sheet page for NewsHour LTO tape #27.

The other inventory is the Approval Tracker spreadsheet in our Google Sheets NewsHour Workflow workbook. The Approval Tracker is used to manage reporting on each GUID’s ingest and digital preservation workflow status. I record in it when I have finished the digital preservation workflow on a batch, and I mark when the files have been approved by all project partners. Partners have two months from the date of delivery to report approvals to George Blood. Once the files are approved they’re automatically placed on the Intern Review sheet for the arrangement and description phase of our workflow.

The Approval Tracker in the NewsHour Workflow workbook.

Okay, forgive me for that; now back to WGBH’s ingest and digital preservation workflow for the NewsHour project!

The first thing I do when we receive a shipment from George Blood is the essential routine I learned the hard way while stocking a retail store – always make sure everything that you paid for is actually there! I do this for the physical LTO tapes, the files on the tapes, the PREMIS spreadsheet, the bags, and the delivery’s inventory. In Terminal I use a bash script that checks a list of GUIDs against the files present on our server to ensure that all bags have been correctly uploaded to the AMS. If we’ve received everything expected, I then organize the data from the inventory, copying the submission checksums into each tape’s sheet in my Excel “checksum inventory”. Then I start working with the tapes.
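That presence check is simple to sketch. Something like the following, assuming a plain-text list of expected GUIDs and a hypothetical delivery directory, reports anything missing or unexpected:

```bash
#!/bin/bash
# Compare a list of expected GUIDs against what actually arrived on the server.
# expected_guids.txt holds one GUID per line; both paths here are hypothetical.
sort expected_guids.txt > expected_sorted.txt
ls -1 /server/newshour/delivery_batch_01 | sort > delivered_sorted.txt

# Lines only in the expected list were paid for but never delivered.
echo "Expected but not received:"
comm -23 expected_sorted.txt delivered_sorted.txt

# Lines only in the delivered list are extras or possible duplicates.
echo "Received but not expected:"
comm -13 expected_sorted.txt delivered_sorted.txt
```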

Important background information: the AAPB staff at WGBH work in a Mac environment, so what I’m writing about works for Mac, but it could easily be adapted to other systems. The first step I take with the tapes is to check them for viruses. We use Sophos to do that in Terminal, with the Sweep command. If no viruses are found, I then use one of our three LTO workstations to copy the MP4 proxies, proxy checksums, and QCTools XML reports from the LTO to a hard drive. I do the copying in Terminal and leave it running while I get on with other work. When the tape is done copying, I use Terminal to confirm that the number of files copied matches the number of files I expected to copy. After that, I run an MD5 report (with the find, -exec, and md5 commands) on the copied files on the hard drive. I put those checksums into my Excel sheet and confirm they match the sums provided by George Blood, that there are no duplicates, and that we received everything we expected. If all is well, I put the checksum report onto our department server and move on to examining the delivered files’ specifications.
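The MD5 report step comes down to a single find command. A sketch along these lines (the drive path and output name are illustrative) produces that kind of manifest, which can then be pasted into the Excel inventory:

```bash
# Hash every copied proxy on the working drive and write a manifest,
# one "hash  filename" line per file. Paths and names are illustrative.
find /Volumes/WORKING_DRIVE/newshour_proxies -type f -name "*.mp4" \
  -exec md5 -r {} \; > lto_md5_manifest.txt
```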

I use MediaInfo and MDQC to confirm that the files we receive conform to our expectations. Again, this is something I could streamline with scripts if the workflow needed it, but MDQC gets the job done for the NewsHour project. MDQC is a free program from AVPreserve that checks a group of files against a reference file and passes or fails them according to rules you specify. I set the test to check that the delivered batch is encoded to our specifications. If any files fail the test, I use MediaInfo in Terminal to examine why they failed. I record any failures at this stage, or earlier in the checksum stage, in an issue tracker spreadsheet the project partners share, and report the problems to the vendor so that they can deliver corrected files.

MDQC’s simple and effective user interface.
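When a file does fail MDQC, a quick MediaInfo call in Terminal usually shows which property is off; for example (the filename is a placeholder):

```bash
# Print the technical metadata for a proxy that failed MDQC so it can be compared
# against the delivery specifications (codec, frame size, frame rate, etc.).
mediainfo cpb-aacip-507-example.mp4

# The same report as XML, if you want to file it with the rest of the QC records.
mediainfo --Output=XML cpb-aacip-507-example.mp4
```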

Next I copy the files from the hard drive onto other working hard drives for the interns to use during the review stage. I then skim a small sample of the files to confirm their content meets our expectations, comparing the digitizations to the transfer notes provided by George Blood in the PREMIS metadata. I also review a few of the QCTools reports, looking at the videos’ levels. I don’t spend much time doing that, though, because the Library of Congress reviews the levels and characteristics of every master file. If everything looks good I move on, because all the proxies will be reviewed at an item level by our interns during the next phase of the project’s workflow anyway.

The last steps are to mark the delivery batch’s digital preservation as complete and the files as approved in the Approval Tracker, create a WGBH catalog record for the LTO, run a final MD5 manifest of the LTO and hard drive, upload some preservation metadata (archival LTO name, file checksums, and the project’s internal identifying code) to the AMS, and place the LTO and drive in our vault. The interns then review and describe the records, and after that the GUIDs move into our access workflow. Look forward to future blog posts about those phases!

PBS NewsHour Digitization Project Update: Workflow Management

In January 2016, the Council on Library and Information Resources awarded WGBH, the Library of Congress, WETA, and NewsHour Productions, LLC a grant to digitize, preserve, and make publicly accessible on the AAPB website 32 years of NewsHour and predecessor programs, from October 1975 to December 2007, that currently exist on obsolete analog formats. Described by co-creator Robert MacNeil as “a place where the news is allowed to breathe, where we can calmly, intelligently look at what has happened, what it means and why it is important,” the NewsHour has consistently provided a forum for newsmakers and experts in many fields to present their views at length in a format intended to achieve clarity and balance, rather than brevity and ratings. A Gallup Poll found the NewsHour to be America’s “most believed” program. We are honored to preserve this monumental series and include it in the AAPB.

Today, we’re pleased to update you on our project progress, specifically regarding the new digitization project workflows that we have developed and implemented to achieve the goals of the project.

The physical work of digitizing the NewsHour tapes and ingesting the new files across the project collaborators has been moving forward since last fall and is now progressing steadily. Like many projects, ours started out as a great idea with many enthusiastic partners – and that’s good, because we needed some enthusiasm to help us sort out a practical workflow for simultaneously tracking, ingesting, quality checking, digitally preserving, describing, and making available at least 7,512 unique programs!

In practice the workflow has become quite different from what the AAPB experienced with our initial project to digitize 40,000 hours of programming from more than 100 stations. With NewsHour, we started by examining the capabilities of each collaborator and what they already intended to do regarding ingestion and quality control on their files. That survey identified efficiencies: The Library of Congress (the Library) took the lead on ingesting preservation quality files and conducting item level quality control of the files. WGBH focused on ingestion of the proxies and communication with George Blood, the digitization vendor. The Library uses the Baton quality control software to individually pass or fail every file received. At WGBH, we use MDQC from AVPreserve to check that the proxy files we receive are encoded in accordance with our desired specifications. Both institutions use scripts to validate the MD5 file checksums the vendor provides us. If any errors are encountered, we share them in a Google Sheet and WGBH notifies the vendor. The vendor then rectifies the errors and submits a replacement file. Once approved, it is time for WGBH to make the files accessible on the AAPB website.

I imagined that making the files accessible would be a smooth routine – I would put the approved files online and everything would be great. What a nice thought that was! In truth, any one work (Global Unique Identifier or “GUID” – our unique work-level identifier) could have many factors that influence what actions need to be taken to prepare it to go online. When I started reviewing the files we were receiving, looking at transcripts, and trying to keep track of the data and where various GUIDs were in the workflow, I realized that the “some spreadsheets and my mind” system I intended to employ would result in too many GUIDs falling through the cracks, and would likely necessitate far too much duplicate work. I decided to identify the possible statuses of GUIDs in the NewsHour series and every action that would need to be taken to resolve each status. After I stared at a wall for probably too long, my coworkers found me with bloodshot eyes (JK?) and this map:

(It seems appropriate that the fire alarm is in this picture of the map)

Some of the statuses I identified are:

  • Tapes we do not want captured
  • Tapes that are not able to be captured
  • GUIDs where the digitization is not yet approved
  • GUIDs that don’t have transcripts
  • GUIDs that have transcripts, but they don’t match the content
  • GUIDs that are not a broadcast episode of the NewsHour
  • GUIDs that are incomplete recordings
  • GUIDs that need redacting
  • GUIDs that passed QC but should not have

Every status has multiple actions that need to be taken to resolve that issue and move the GUID towards being accessible. The statuses are not mutually exclusive, though some are contingent on or preclude others. It was immediately clear to me that this would be too much to track manually and that I needed a centralized, automated solution. The system would have to allow simultaneous users and would need to be low cost and low maintenance. After discussions with my colleagues, we decided that the best solution would be a Google Spreadsheet that everyone at the AAPB could share.

Here is a link to a copy of the NewsHour Workflow workbook we built. The workbook functions through a “Master List” with a row of metadata for every GUID, an “Intern Review” phase worksheet that automatically assigns statuses to GUIDs based on answers to questions, workflow “Tracker” sheets listing the actions that resolve each status, and a “Master GUID Status Sheet” that automatically displays the status of every GUID and where each one is in the overall workflow. Some actions in trackers automatically place the GUID into another tracker – for instance, if a reviewer working in the “No Transcript Tracker” on an episode for which we don’t have a transcript identifies content that needs to be redacted, the GUID is automatically placed on the “Redaction Tracker”.

A broad description of our current project workflow is: All of the project’s GUIDs are on the “Master GUID List” and their presence on that list automatically puts them on the “Master GUID Status Sheet”. When we receive a GUID’s digitized file, staff put the GUID on the “Approval Tracker”. When a GUID passes both WGBH and the Library’s QC workflows it is marked approved on the “Approval Tracker” and automatically placed on the “Intern Review Sheet.” Interns review each GUID and answer questions about the content and transcript, and the answers to those questions automatically place the GUID into different status trackers. We then use the trackers to track actions that resolve the GUIDs’ statuses. When a GUID’s issues in all the status trackers are resolved, it is marked as “READY!” to go online and placed in the “AAPB Online Tracker.” When we’ve updated the GUID’s metadata, put the file online, and recorded those actions in the “AAPB Online Tracker,” the GUID is automatically marked complete. Additionally, any statuses that indicate a GUID cannot go online (for instance, a tape was in fatal condition and unable to be captured) are marked as such in the “Master GUID Status Sheet.” This function helps us differentiate between GUIDs that will not be able to go online and GUIDs that are not yet online but should be when the project is complete.

Here is a picture of a portion of the “Master GUID Status Sheet.”

Right now there are a lot of red GUIDs in this picture of the Master sheet, but in the coming months they will be switching to green!

The workbook functions through cross-sheet references and simple logic. It is built mostly with “IF,” “COUNTIF,” and “VLOOKUP” formulas. Its functionality depends on users entering the correct values in action cells and confirming that they’ve completed their work, but generally those values are locked in with data validation rules and sheet permissions. The workflow review I had conducted proved valuable because it provided the logic needed to construct the formulas and tracking sheets.

Building the workflow manager in Google Sheets took a few drafts. I tested the workflow with our first few NewsHour pilot digitizations, unleashed it on a few kind colleagues, and then improved it with their helpful feedback. I hope that the workbook will save us time figuring out what needs to happen to each GUID and will help prevent any GUIDs from falling through the cracks or incorrectly being put online. Truthfully, the workbook struggles under its own weight sometimes (at one point in my design I reached the 2,000,000-cell limit and had to delete all the extra cells spreadsheet programs always automatically make). Anyone conducting a project any larger or more complicated than the NewsHour would likely need to upgrade to true workflow management software or a program designed to work from the command line. I hope, if you’re interested, that you take some time to try out the copy of the NewsHour Workflow workbook! If you’d like more information, a link to our workflow documentation that further explains the workbook can be provided.

This post was written by Charles Hosale, WGBH.

AAPB NDSR Resources Round-up


In 2015, the Institute of Museum and Library Services awarded a generous grant to WGBH on behalf of the American Archive of Public Broadcasting (AAPB) to develop the AAPB National Digital Stewardship Residency (NDSR). Through this project, we have placed seven graduates of master’s degree programs in digital stewardship residencies at public media organizations around the country.

AAPB NDSR has already yielded dozens of great resources for the public media and audiovisual preservation community – and the residents aren’t even halfway done yet! As we near the program’s midpoint, we wanted to catch you up on the program so far.

We started off in July 2016 with Immersion Week in Boston, which featured presentations on the history of public media and the AAPB, an overview of physical and digital audiovisual materials, an introduction to audiovisual metadata, and instructional seminars on digital preservation workflows, project management, and professional development. Attendees also participated in a full-day session on “Thinking Like a Computer” and a hands-on command line workshop.

Several sessions from Immersion Week were filmed by the WGBH Forum Network.

In August 2016, the residents dispersed to their host stations, and began recording their experiences in a series of thoughtful blog posts, covering topics from home movies to DAM systems to writing in Python.

AAPB NDSR blog posts to date include:

“Digital Stewardship at KBOO Community Radio,” Selena Chau (8/9/16)

“Metadata Practices at Minnesota Public Radio,” Kate McManus (8/15/16)

“NDSA, data wrangling, and KBOO treasures,” Selena Chau (8/30/16)

“Minnesota Books and Authors,” Kate McManus (9/23/16)

“Snapshot from the IASA Conference: Thoughts on the 2nd Day,” Eddy Colloton (9/29/16)

“Who just md5deep-ed and redirected all them checksums to a .csv file? This gal,” Lorena Ramirez-Lopez (10/6/16)

“IASA Day 1 and Voice to Text Recognition,” Selena Chau (10/11/16)

“IASA – Remixed,” Kate McManus (10/12/16)

“Learning GitHub (or, if I can do it, you can too!),” Andrew Weaver (10/13/16)

“Home Movie Day,” Eddy Colloton (10/15/16)

“Snakes in the Archive,” Adam Lott (10/20/16)

“Vietnam, Oral Histories, and the WYSO Archives Digital Humanities Symposium,” Tressa Graves (11/7/16)

“Archives in Conversation (A Glimpse into the Minnesota Archives Symposium, 2016),” Kate McManus (11/15/16)

“Inside the WHUT video library clean-up – part 1: SpaceSaver,” Lorena Ramirez-Lopez (11/21/16)

“Is there something that does it all?: Choosing a metadata management system,” Selena Chau (11/22/16)

“Inside the WHUT video library clean-up – part 2: lots of manual labor,” Lorena Ramirez-Lopez (12/20/16)

“Just Ask For Help Already!” Eddy Colloton (12/22/16)

August also kicked off our first series of guest webinars, focusing on a range of topics of interest to audiovisual and digital preservation professionals. Most webinars were recorded, and all have slides available.

AAPB NDSR webinars to date include:

“Metadata: Storage, Modeling and Quality,” by Kara Van Malssen, Partner & Senior Consultant at AVPreserve

“Public Media Production Workflows,” by Leah Weisse, WGBH Digital Archive Manager/Production Archival Compliance Manager (slides)

“Imposter Syndrome,” by Jen LaBarbera, Head Archivist at Lambda Archives of San Diego, and Dinah Handel, Mass Digitization Coordinator at the NYPL (slides)

“Preservation and Access: Digital Audio,” by Erica Titkemeyer, Project Director and AV Conservator at the Southern Folklife Collection (slides)

“Troubleshooting Digital Preservation,” by Shira Peltzman, Digital Archivist at UCLA Library (slides)

“Studs Terkel Radio Archive: Tips and Tricks for Sharing Great Audio,” by Grace Radkins, Digital Content Librarian at Studs Terkel Radio Library (slides)

“From Theory to Action: Digital Preservation Tools and Strategies,” by Danielle Spalenka, Project Director of the Digital POWRR Project (slides)

Our first two resident-hosted webinars (open to the public) will be happening this month! Registration and more info is available here.

The residents also hosted two great panel presentations, first in September at the International Association of Sound and Audiovisual Archives Conference, and in November at the Association of Moving Image Archivists Conference. The AMIA session in particular generated a lot of Twitter chatter; you can see a roundup here.

To keep up with AAPB NDSR blog posts, webinar recordings, and project updates as they happen, follow the AAPB NDSR site at ndsr.americanarchive.org.

Register for our upcoming webinars

We have two free webinars coming up in January from our AAPB NDSR residents!

Challenges of Removable Media in Digital Preservation (Eddy Colloton)
Thursday, January 12th, 3:00 PM ET

Removable storage media could be considered the most ubiquitous of digital formats. From floppy disks to USB flash drives, these portable, inexpensive and practical devices have been relied upon by all manner of content producers. Unfortunately, removable media is rarely designed with long-term storage in mind. Optical media is easy to scratch, flash drives can “leak” electrons, and floppy disks degrade over time. Each of these formats is unique and carries its own risks. This webinar, open to the public, will focus on floppy disks, optical media, and flash drives from a preservation perspective. The discussion will include a brief description of the way information is written and stored on such formats, before detailing solutions and technology for retrieving data from these unreliable sources.

Register for “Challenges of Removable Media in Digital Preservation”

Demystifying FFmpeg/FFplay (Andrew Weaver)
Thursday, January 26th, 3:00 PM ET

The FFmpeg/FFplay combination is a surprisingly multifaceted tool that can be used in myriad ways within A/V workflows.  This webinar will present an introduction to basic FFmpeg syntax and applications (such as basic file transcoding) before moving into examples of alternate uses.  These include perceptual hashing, OCR, visual/numerical signal analysis and filter pads.
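If you have never used FFmpeg, the basic file transcoding the webinar opens with looks something like this sketch (the filenames are placeholders):

```bash
# Transcode a preservation master to an H.264/AAC MP4 access copy.
ffmpeg -i input_master.mov -c:v libx264 -pix_fmt yuv420p -c:a aac output_access.mp4

# FFplay, the companion player, can preview a file (or a filter chain) directly.
ffplay input_master.mov
```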

Register for “Demystifying FFmpeg/FFplay”

Library of Congress Releases 2016-2017 Recommended Formats Statement

The Library of Congress has released its latest version of the Library of Congress’ Recommended Formats Statement, including for audio-visual media. These recommendations are useful for organizations that are planning digitization projects or are developing methods to digitally preserve their “born digital” programming.


The Library of Congress is pleased to announce the release of the 2016-2017 Recommended Formats Statement (http://www.loc.gov/preservation/resources/rfs/).  The proliferation of ways in which works can be created and distributed is a challenge and an opportunity for the Library (and for all institutions and organizations which seek to build collections of creative works) and the Recommended Formats Statement is one way in which the Library seeks to meet the challenge and take full advantage of the opportunity.  By providing guidance in the form of technical characteristics and metadata which best support the preservation and long-term access of digital works (and analog works as well), the Library hopes to encourage creators, vendors, archivists and librarians to use the recommended formats in order to further the creation, acquisition and preservation of creative works which will be available for the use of future generations at the Library of Congress and other cultural memory organizations.

The engagement with the Statement that the Library has seen from others has been extremely heartening.  In response to interest in our work from representatives in the architectural community who see their design work imperiled by insufficient attention to digital preservation, we have updated the Statement to align more closely with developments in this field.  Most importantly of all, we now include websites as a category of their own in the Statement.  Websites are probably the largest field of digital expression available for creators today, yet most creators tend to take a passive role in ensuring the preservation and long-term access of their websites.  By including websites in the Recommended Formats Statement, we hope to encourage website creators to engage more fully in digital preservation, as we aim to do with all the other forms of digital works included in the Statement, by making their websites more preservation-friendly.

The Library remains committed to acquiring and preserving digital works and to providing whatever support it can to other similarly committed stakeholders.  We shall continue to build our collections with their preservation and long-term access firmly in mind; and we shall continue to engage with others in the community in efforts such as the Recommended Formats Statement.  We encourage any and all feedback and comments (http://www.loc.gov/preservation/resources/rfs/contacts.html) others might have on the Statement that might make it more useful for both our needs and for the needs of anyone who might find it worthwhile in their own work.  And we shall continue to engage in an annual review process to ensure that it meets the needs of all stakeholders in the preservation and long-term access of creative works.

AAPB Releases Experimental API

This blog post was shared by Chuck McCallum, AAPB Developer at WGBH.

With the most recent release, the AAPB now has a public API. It’s an experiment at this point, but documentation is available, and we’ve put up a few examples: you can explore coverage of different topics over the years, or see how coverage changes in different parts of the country. Let us know if you build anything interesting!
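For the curious, here is roughly what calling the API from the command line might look like; the endpoint paths and parameters below are assumptions for illustration, so consult the documentation for the real query syntax.

```bash
# Illustrative only -- check the AAPB API documentation for the actual endpoint
# names and query parameters before relying on these URLs.

# Keyword search, requesting JSON:
curl "https://americanarchive.org/api.json?q=newshour"

# Fetch the record for a single item by GUID (the GUID here is a placeholder):
curl "https://americanarchive.org/api/cpb-aacip-507-example.xml"
```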



Digital Preservation for Public Broadcasting Webinar Recording is Available!

The following is a guest post by Rebecca Fraimow, National Digital Stewardship Resident at WGBH and the AAPB.

As the National Digital Stewardship Resident with WGBH and the AAPB, I’ve backed up a lot of drives, designed a lot of workflow diagrams, and written up a lot of documentation, but for my final deliverable for the residency, I got to do something with a slightly broader focus: create a webinar that focused on digital preservation concepts through the lens of the unique needs of a public broadcasting organization.

Rebecca Fraimow is the NDSR resident at WGBH and the AAPB.

Although I’ve spent most of the past year in a public media context, WGBH is pretty unique among public media organizations: we have a strong archival department and a dedicated budget for preservation. That gives us a lot of opportunities to invest in tools and techniques that most public media organizations aren’t going to have. As a result, creating a webinar about digital preservation best practices from a public broadcasting perspective is not as simple as saying ‘here’s what we do and why we do it’ – while it would be great if all stations had the same level of resources, that level of buy-in is something most archivally minded station employees have to fight really hard to make a case for.

Therefore, instead of designing the webinar based around our workflows at WGBH, I sent out an open call for topics to see what the audience of (primarily AAPB) stations really wanted to hear about. I got a wide range of responses:

– where to start when creating a digital library
– best practices for migrating videotape to digital files
– how to manage the volume with a small staff
– tools for embedding metadata into audio and video files
– systems for small organizations with little IT support
– integrity checking, video file standards, naming conventions
– funding
– getting producers onboard from the get-go
– how to go back into the archives where proper documentation doesn’t exist
– how to properly use the PBCore field called instantiationStandard

Obviously, I don’t have the answer to all these questions (to be honest, instantiationStandard is kind of a confusing field) and, of course, for many of them, there is no right answer — as I can tell you from the experiences of my entire NDSR cohort, even organizations with huge dedicated preservation departments are still trying to figure out the solutions that make the most sense for them.  Next year, the AAPB will be sending a new crop of NDSR residents into public media stations to help grapple with some of these issues, but before finding answers, the first step is figuring out the right questions to ask.   The webinar is designed to provide a guide to some of those questions, and an overview of the issues to consider when making a case for digital preservation.

You can view the full webinar below (click on the title to open in a larger screen):

Digital Preservation for Public Broadcasting from American Archive on Vimeo.

The slides are available here:

http://www.slideshare.net/RebeccaFraimow/digital-preservation-for-public-media