AAPB Webinar Series with the Boston Library Consortium

This past March, the American Archive of Public Broadcasting (AAPB) hosted two webinars with the Boston Library Consortium. The two-part series provided an overview of the AAPB and reviewed ways it can be used effectively as a resource for teaching and research.

Part I – “Accessibility of AAPB in Academic Libraries”
This webinar covered the AAPB’s background, governance, and infrastructure. Casey Kaufman, AAPB Project Manager, and Ryn Marchese, AAPB Engagement and Use Manager, discussed the scope, content, and provenance of the AAPB collection; methods of searching, navigating, and accessing content in the AAPB; examples of the types of materials available in the collection; and the scholarly and research value of audiovisual collections, particularly public media archives.

Slides available: https://www.slideshare.net/RynMarchese/blc-webinar-part-1-accessibility-of-aapb-for-academic-libraries

 

In Part II, panelists Casey Kaufman (WGBH), Ingrid Ockert (Princeton University), and Mark Williams (Dartmouth College) explored specific use cases for librarians and researchers in accessing and making use of the AAPB collection. Topics included a general overview of how scholars and researchers are seeking to use digital AV collections; a brief recap of how the AAPB provides access to its collection for researchers and the general public; incorporating the AAPB into subject-specific LibGuides; the use of audiovisual collections in traditional historical research and in academic coursework; and examples of how AAPB metadata and transcripts can be used in digital humanities research and data mining.

Slides available: https://blc.org/sites/default/files/BLC_Uploads/Part%20II_BLCs%20AAPB%20Webinars_Speaker%20Slides.pdf

 

Special thanks to Jessica Hardin and Susan Stearns of the Boston Library Consortium for helping organize this series!

Upcoming AAPB Webinar Featuring Kathryn Gronsbell, Digital Collections Manager at Carnegie Hall

Photo courtesy of Rebecca Benson, @jeybecques, PBPF Fellow at University of Missouri.

This Thursday, March 15th at 8 pm EDT, American Archive of Public Broadcasting (AAPB) staff will host a webinar with Kathryn Gronsbell, Digital Collections Manager at Carnegie Hall. The webinar will cover topics in documentation, including why documentation is important, what to think about when recording workflows for future practitioners, and where to find examples of good documentation in the wild.

The public is welcome to join for the first half hour. The last half hour will be limited to Q&A with our Public Broadcasting Preservation Fellows, who have now begun to inventory their digitized public broadcasting collections to be preserved in the AAPB.

Webinar URL: http://wgbh1.adobeconnect.com/documentation/

For anyone who missed the last webinar on tools for Quality Control, it’s now also available for viewing through this link: http://wgbh1.adobeconnect.com/psv1042lp222/.

*******************************

For more updates on the Public Broadcasting Preservation Fellowship project, follow the project at pbpf.americanarchive.org and on Twitter at #aapbpf, and come back in a few months to check out the results of their work: digitized content preserved in the American Archive of Public Broadcasting from our collaborating host organizations WUNC, KOPN, the Oklahoma Educational Television Authority, Georgia Public Broadcasting, and the Center for Asian American Media, as well as documentation created to support ongoing audio and video preservation education at the University of Missouri, University of Oklahoma, Clayton State University, University of North Carolina at Chapel Hill, and San Jose State University.

Upcoming Webinar: AAPB’s Quality Control Tools and Techniques for Ingesting Digitized Collections

Oklahoma mentor Lisa Henry (left) cleaning a U-matic deck with Public Broadcasting Preservation Fellow Tanya Yule.

This Thursday, February 15th at 8 pm EST, American Archive of Public Broadcasting (AAPB) staff will host a webinar covering quality control tools and technologies used when ingesting digitized collections into the AAPB archive, including MDQC, MediaConch, Sonic Visualiser, and QCTools.

The public is welcome to join for the first half hour. The last half hour will be limited to Q&A with our Public Broadcasting Preservation Fellows, who are just now beginning the process of digitizing at-risk public broadcasting collections to be preserved in the AAPB.

Webinar URL: http://wgbh1.adobeconnect.com/psv1042lp222/

*******************************

For more updates on the Public Broadcasting Preservation Fellowship project, follow the project at pbpf.americanarchive.org and on Twitter at #aapbpf, and come back in a few months to check out the results of their work: digitized content preserved in the American Archive of Public Broadcasting from our collaborating host organizations WUNC, KOPN, the Oklahoma Educational Television Authority, Georgia Public Broadcasting, and the Center for Asian American Media, as well as documentation created to support ongoing audio and video preservation education at the University of Missouri, University of Oklahoma, Clayton State University, University of North Carolina at Chapel Hill, and San Jose State University.

 

Resources Roundup: AAPB Presentations from 2017 AMIA Conference

Earlier this month the American Archive of Public Broadcasting staff hosted several workshops at the 2017 Association of Moving Image Archivists (AMIA) conference in New Orleans. Their presentations on workflows, crowdsourcing, and best copyright practices are now available online! Be sure to also check out AMIA’s YouTube channel for recorded sessions.

THURSDAY, November 30th

  • PBCore Advisory Sub-Committee Meeting
    Rebecca Fraimow reported on general activities of the Sub-Committee and the PBCore Development and Training Project. The following current activities were presented:

  • PBCore Cataloging Tool (Linda Tadic)
  • PBCore MediaInfo updates (Dave Rice)
  • ProTrack integration (Rebecca Fraimow)
  • Updated CSV templates (Sadie Roosa)
  • PBCore crosswalks (Rebecca Fraimow and Sadie Roosa)

FRIDAY, December 1st

Archives that hold A/V materials are at a critical point, with many cultural heritage institutions needing to take immediate action to safeguard at-risk media formats before the content they contain is lost forever. Yet many in the cultural heritage community do not have sufficient education and training in how to handle the special needs that A/V archival materials present. In the summer of 2015, a handful of archive educators and students formed a pan-institutional group to help foster “educational opportunities in audiovisual archiving for those engaged in the cultural heritage sector.” The AV Competency Framework Working Group is developing a set of competencies for audiovisual archive training of students in graduate-level education programs and in continuing education settings. In this panel, core members of the working group will discuss the main goals of the project and the progress that has been made on it thus far.

Born-digital audiovisual files continue to present a conundrum to archivists in the field today: should they be accepted as-is, transcoded, or migrated? Is transcoding to a recommended preservation format always worth the potential extra storage space and staff time? If so, what are the ideal target specifications? In this presentation, individuals working closely with born-digital audiovisual content from the University of North Carolina, WGBH, and the American Folklife Center at the Library of Congress will present their own use cases involving collections processing practices, from “best practice” to the practical reality of “good enough.” These use cases will highlight situations wherein video quality, subject matter, file size, and stakeholder expectations end up playing important roles in directing the steps taken for preservation. From these experiences, the panel will put forth suggestions for tiered preservation decision making, recognizing that not all files should necessarily be treated alike.

  • Crowdsourcing Anecdotes

How does the public play a role in making historical AV content accessible? The American Archive of Public Broadcasting has launched two games that engage the public in transcribing and describing 70+ years of audio and visual content comprising more than 50,000 hours.

THE TOOLS:

(Speech-to-Text Transcript Correction) FIX IT is an online game that allows the public to identify and correct errors in our machine-generated transcripts. FIX IT players have exclusive access to historical content and long-lost interviews from stations across the country.

AAPB KALDI is a tool and profile for speech-to-text transcription of video and audio, released by the Pop Up Archive and made available on GitHub at github.com/WGBH/american-archive-kaldi.

(Program Credits Cataloging) ROLL THE CREDITS is a game that allows the public to identify and transcribe the text that appears on screen in many television broadcasts. ROLL THE CREDITS asks users to collect this valuable information and classify it into categories that can be added to the AAPB catalog. To accomplish this goal, we’ve extracted frames from uncataloged video files and are asking for help to transcribe the important information contained in each frame.

SATURDAY, December 2nd

Digitized collections often remain almost as inaccessible as they were on their original analog carriers, primarily due to institutional concerns about copyright infringement and privacy. The American Archive of Public Broadcasting has taken steps to overcome these challenges, making available online more than 22,000 historic programs with zero take-down notices since the 2015 launch. This copyright session will highlight practical and successful strategies for making collections available online. The panel will share strategies for: 1) developing template forms with standard terms to maximize use and access, 2) developing a rights assessment framework with limited resources (an institutional “Bucket Policy”), 3) providing limited access to remote researchers for content not available in the Online Reading Room, and 4) promoting access through online crowdsourcing initiatives.

The American Archive of Public Broadcasting seeks to preserve and make accessible significant historical public media content, and to coordinate a national effort to save at-risk public media recordings. In the four years since WGBH and the Library of Congress began stewardship of the project, significant steps have been taken towards accomplishing these goals. The effort has inspired workflows that function constructively, beginning with preservation at local stations and building to national accessibility on the AAPB. Archivists from two contributing public broadcasters will present their institutions’ local preservation and access workflows. Representatives from WGBH and the Library of Congress will discuss collaborating with contributors and the AAPB’s digital preservation and access workflows. By sharing their institutions’ roles and how collaborators participate, the speakers will present a full picture of the AAPB’s constructive inter-institutional work. Attendees will gain knowledge of practical workflows that facilitate both local and national AV preservation and access.

As an increasing number of audiovisual formats become obsolete and the available hours remaining on deteriorating playback machines decrease, it is essential for institutions to digitize their AV holdings to ensure long-term preservation and access. With an estimated hundreds of millions of items to digitize, it is impractical, even impossible, for institutions to perform all of this work in-house before time runs out. While this can seem like a daunting process, why learn the hard way when you can benefit from the experiences of others? From those embarking on their first outsourced AV digitization project to those who have completed successful projects but are looking for ways to refine and scale up their process, everyone has something to learn from these speakers about managing AV digitization projects from start to finish.

How do you bring together a collection of broadcast materials scattered in various geographical locations across the country? National Educational Television (NET), the precursor to PBS, distributed programs nationally to educational television stations from 1954 to 1972. Although this collection is tied together through provenance, it presents a challenge to processing due to differing approaches to description across many repositories over many years. By aggregating inventories into one catalog and describing titles more fully, the NET Collection Catalog will help institutions holding these materials make informed preservation decisions. By its conclusion, AAPB will publish an online list of NET titles annotated with relevant descriptive information culled from NET textual records, which will greatly improve discoverability of NET materials for archivists, scholars, and the general public. Examples of specific cataloging issues will be explored, including contradictory metadata documentation and legacy records, inconsistent titling practices, and the existence of international versions.

ABOUT THE AAPB

The American Archive of Public Broadcasting (AAPB) is a collaboration between the Library of Congress and the WGBH Educational Foundation to coordinate a national effort to preserve at-risk public media before its content is lost to posterity and provide a central web portal for access to the unique programming that public stations have aired over the past 70 years. To date, over 50,000 hours of television and radio programming contributed by more than 100 public media organizations and archives across the United States have been digitized for long-term preservation and access. The entire collection is available on location at WGBH and the Library of Congress, and almost 25,000 programs are available online at americanarchive.org.

Upcoming Webinar: Building AAPB Participation into Digitization Grant Proposals

Building AAPB Participation into Digitization Grant Proposals: Requirements, Recommendations and Workflows

Tuesday, December 12, 2017
12:00pm ET

Webinar Registration form: https://goo.gl/forms/lWWU5GgFkv09bNFi2
Direct meeting URL: http://wgbh1.adobeconnect.com/aapb_grant-proposals-1/

Curious about getting involved in the American Archive of Public Broadcasting (AAPB)?

Seeking information about the workflows and requirements for contributing digitized content and/or metadata to the AAPB?

Writing a grant proposal and want to explore collaborating with the AAPB to preserve copies of your digitized collections and/or provide an access point to your collections through the AAPB metadata portal?

Then this webinar is for you!

On Tuesday, December 12, 2017 at 12:00pm ET, the AAPB will host a webinar focused on grant writing for digitization and subsequent contribution of digital files and metadata to the AAPB.

By the end of this webinar, participants will gain an understanding of:

  • AAPB’s background and infrastructure,
  • how contributing to the AAPB could benefit your collection,
  • steps to becoming an AAPB contributor,
  • metadata and digital file format requirements and recommendations,
  • delivery procedures,
  • other workflows and considerations for contributing digital files and/or metadata to the AAPB, and
  • the value of your collection as part of a national collection and how to express that in a proposal.

Attendees will also receive advice on how to incorporate AAPB contribution into the timelines and work plans of their CLIR Recordings at Risk (applications due February 9, 2018!), CLIR Digitizing Hidden Collections, or other grant proposals.

Fill out this brief form to receive info about future webinars and to receive a webinar meeting invitation sent to your calendar: https://goo.gl/forms/lWWU5GgFkv09bNFi2

Anyone can join the webinar at this URL: http://wgbh1.adobeconnect.com/aapb_grant-proposals-1/

This webinar and future AAPB webinars are generously funded by The Andrew W. Mellon Foundation.

The American Archive of Public Broadcasting (AAPB) is a collaboration between the Library of Congress and the WGBH Educational Foundation to coordinate a national effort to preserve at-risk public media before its content is lost to posterity and provide a central web portal for access to the unique programming that public stations have aired over the past 60 years. To date, over 50,000 hours of television and radio programming contributed by more than 100 public media organizations and archives across the United States have been digitized for long-term preservation and access. The entire collection is available on location at the Library of Congress and WGBH, and almost 25,000 programs are available online at americanarchive.org.

“Dockerized” Kaldi Speech-to-Text Tool

At the AAPB “Crowdsourcing Anecdotes” meeting last Friday at the Association of Moving Image Archivists conference, I talked about a free “Dockerized” build of Kaldi made by Stephen McLaughlin, PhD student at the UT Austin School of Information. I thought I would follow up on my introduction there by providing links to these resources, instructions for setting it up, and some anecdotes about using it. First, the best resource for this Docker Kaldi and Stephen’s work is the HiPSTAS GitHub repository: https://github.com/hipstas/kaldi-pop-up-archive. It also has detailed information for setting up and running the Docker Kaldi.

I confess that I don’t know much about computer programming and engineering besides what I need to get my work done. I am an archivist and I eagerly continue to gain more computer skills, but some of my terminology here might be kinda wrong or unclear. Anyways, Kaldi is a free speech-to-text tool that interprets audio recordings and outputs timestamped JSON and text files. This “Dockerized” Kaldi allows you to easily get a version of Kaldi running on pretty much any reasonably powerful computer. The recommended minimum is at least 6 GB of RAM, and I’m not sure about the CPU. The more of both the better, I’m sure.

The Docker platform provides a framework to easily download and set up a computer environment in which Kaldi can run. Kaldi is pretty complicated, but Stephen’s Docker image (https://hub.docker.com/r/hipstas/kaldi-pop-up-archive) helps us all bypass setting up Kaldi. As a bonus, it comes set up with the language model that Pop Up Archive created as part of our IMLS grant (link here) with them and HiPSTAS. They trained the model using AAPB recordings. Kaldi needs a trained language model to interpret audio data put through the system. Because this build of Kaldi uses the Pop Up Archive model, it is already trained for American English.

I set up my Docker on my Mac laptop, so the rest of the tutorial will focus on that system, but the GitHub repo has information for Windows and Linux, and those are not very different. By the way, these instructions will probably be really easy for people who are used to interacting with tools in the command line, but I am going to write this post as if the reader hasn’t done that much. I will also note that while this build of Kaldi is really exciting and potentially useful, especially given all the fighting I’ve done with these kinds of systems in my career, I didn’t test it thoroughly because it is only Stephen’s experiment complementing the grant project. I’d love to get feedback on issues you might encounter! Also I’ve got to thank Stephen and HiPSTAS!! THANK YOU Stephen!!

SET UP AND USE:

The first step is to download Docker (https://www.docker.com/). You then need to go into Docker’s preferences, under Advanced, and make sure that Docker has access to at least 6 GB of RAM. Add more if you’d like.
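If you’d rather confirm the allocation from the Terminal, a command along these lines should print the amount of memory Docker can use, in bytes (a small sketch; I’m assuming the MemTotal field that recent Docker versions expose):

# Print the memory available to Docker, in bytes
docker info --format '{{.MemTotal}}'

Anything at or above 6442450944 (6 GB) means you’re set.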

Give Docker more power!

Then navigate to the Terminal and pull Stephen’s Docker image for Kaldi. The command is “docker pull -a hipstas/kaldi-pop-up-archive”. (Note: Stephen’s GitHub says that you can run the pull without options, but I got errors if I ran it without “-a”.) This is a big 12 GB download, so go do something else while it finishes. I ate some Thanksgiving leftovers.

When everything is finished downloading, set up the image by running the command “docker run -it --name kaldi_pua --volume ~/Desktop/audio_in/:/audio_in/ hipstas/kaldi-pop-up-archive:v1”. This starts the Kaldi Docker image and creates a new folder on your desktop where you can add media files you want to run through Kaldi. This is also the place where Kaldi will write its output. Add some media to the folder, BUT NOTE: the filenames cannot have spaces or uncommon characters or Kaldi will fail. My test of this setup ran well on some short mp4s. Also, your Terminal will now be controlling the Docker image, so your command line prompt will look different than it did, and you won’t be “in” your computer’s file system until you exit the Docker image.
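If your filenames do have spaces, a quick rename pass like this one can clean them up before you run a batch (a rough sketch, assuming the audio_in folder that the run command created on your Desktop):

# Swap spaces for underscores in every filename in audio_in
for f in ~/Desktop/audio_in/*; do
  new="${f// /_}"
  [ "$f" != "$new" ] && mv -n "$f" "$new"   # -n refuses to overwrite an existing file
done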

Now you need to download the script that initiates the Kaldi process. The command to download it is “wget https://raw.githubusercontent.com/hipstas/kaldi-pop-up-archive/master/setup.sh”. Once that is downloaded to the audio_in folder (and you’ve added media to the same folder), you can run a batch by executing the command “sh ./setup.sh”.

Kaldi will run through a batch, and a ton of text will roll through your Terminal. Don’t be afraid that it is taking forever. Kaldi is meant to run on very powerful computers, and running it this way is slow. I tested a 30-minute recording, and it took 2.5 hours to process. It will go faster the more computing power you allow Docker to use, but it is reasonable to assume that on most computers the processing time will be around 5 times the recording length.

Picture of Kaldi doing its thing

The setup script converts wav, mp3, and mp4 files to a 16 kHz broadcast WAV, which is the input that Kaldi requires. You might need to manually convert your media to broadcast WAV if the setup script doesn’t work. I started out by testing a broadcast WAV that I made myself with FFmpeg, but Kaldi and/or the setup script didn’t like it. I didn’t resolve that problem because the Kaldi image runs fine on media that it converts itself, so that saves me the trouble anyways.
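If you do end up converting manually, an FFmpeg command along these lines should produce a 16 kHz, mono, 16-bit PCM WAV (a sketch based on the specs above; I haven’t confirmed exactly which flavor of broadcast WAV the setup script produces, so treat these parameters as a starting point):

# Convert input.mp4 to a 16 kHz, mono, 16-bit PCM WAV for Kaldi
ffmpeg -i input.mp4 -ar 16000 -ac 1 -c:a pcm_s16le output.wav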

When Kaldi is done processing, the text output will be in the “transcripts” folder inside the “audio_in” folder. There will be both a JSON and a txt file for every recording processed, named the same as the original media file. The quality of the output depends greatly on the original quality of the recording, and on how closely the recording resembles the language model (in this case, a studio recording of individuals speaking standard American English). That said, we’ve had some pretty good results in our tests. NOTE THAT if you haven’t assigned enough power to Docker, Kaldi will fail to process, and will do so without reporting an error. The failed files will create output JSON and txt files that are blank. If you’re having trouble, try adding more RAM to Docker, or check that your media file is successfully converting to broadcast WAV.
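One quick way to spot those silent failures is to look for empty transcripts. Assuming the default audio_in location, something like this will list any output file that came back blank:

# List zero-byte transcript files left behind by failed runs
find ~/Desktop/audio_in/transcripts -type f -size 0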

When you want to return your terminal to normal, use the command “exit” to shut down the image and return to your file system.

When you want to start the Kaldi image again to run another batch, open another session by running “docker start /kaldi_pua” and then “docker exec -it kaldi_pua bash”. You’ll then be in the Kaldi image and can run the batch with the “sh ./setup.sh” command.

I am sure that there are ways to update or modify the language model, or to use a different model, or to add different scripts to the Docker Kaldi, or to integrate it into bigger workflows. I haven’t spent much time exploring any of that, but I hope you found this post a helpful start. We’re going to keep it in mind as we build up our speech-to-text workflows, and we’ll be sure to share any developments. Happy speech-to-texting!!

AAPB NDSR Resources Roundup

In 2015, the Institute of Museum and Library Services awarded a generous grant to WGBH on behalf of the American Archive of Public Broadcasting (AAPB) to develop the AAPB National Digital Stewardship Residency (NDSR). Through the grant, we placed residents at public media organizations around the country to complete digital stewardship projects.

After a fantastic final presentation at the Society of American Archivists meeting in Portland last month, the 2016-2017 AAPB NDSR residencies have now officially drawn to a close. We wanted to share with you a complete list of the resources generated throughout the residencies, including instructional webinars, blog posts, and resources created for stations over the course of the NDSR projects.

Resources

Audiorecorder (Open-Source Audio Digitization Tool)

CUNY TV Mediamicroservices Documentation

KBOO 2-Page Recommendation Summary

KBOO Digital Preservation Policy

KBOO Current Digital Storage and Archiving Practices

KBOO Diagram for Current Digital Program Production Practices

PBCore-Based Data Model for KBOO Analog Audio Assets

Workflow for Open-Reel Preservation at KBOO

KBOO Digital Audio Guidelines and Procedures

Recommended Next Steps for Developing an Integrated Searchable Database of Born-Digital and Analog Audio at KBOO

Louisiana Public Broadcasting Digital Preservation Plan

WHUT Naming Conventions for Local Programming

Wisconsin Public Television Microsoft Access Database to PBCore Crosswalk

Wisconsin Public Television AMS Workflows Documentation

Wisconsin Public Television Digitization Workflows Chart

Wisconsin Public Television Proposal for New Metadata Database

Resident Webinars

“Challenges of Removable Media in Digital Preservation,” by Eddy Colloton (slides)

“Demystifying FFmpeg/FFprobe,” by Andrew Weaver (slides)

“Intro to Data Manipulation with Python CSV,” by Adam Lott (slides)

“Through the Trapdoor: Metadata and Disambiguation in Fanfiction,” by Kate McManus (slides)

“ResourceSpace for Audiovisual Archiving,” by Selena Chau (slides) (Demo videos: 1, 2, 3, 4)

“Whats, Whys, and How Tos of Web Archiving,” by Lorena Ramírez-López (slides) (transcript)

Other Webinars

“Metadata: Storage, Modeling and Quality,” by Kara Van Malssen, Partner & Senior Consultant at AVPreserve (slides only)

“Public Media Production Workflows,” by Leah Weisse, WGBH Digital Archive Manager/Production Archival Compliance Manager (slides)

“Imposter Syndrome,” by Jen LaBarbera, Head Archivist at Lambda Archives of San Diego, and Dinah Handel, Mass Digitization Coordinator at the NYPL (slides)

“Preservation and Access: Digital Audio,” by Erica Titkemeyer, Project Director and AV Conservator at the Southern Folklife Collection (slides)

“Troubleshooting Digital Preservation,” by Shira Peltzman, Digital Archivist at UCLA Library (slides)

“Studs Terkel Radio Archive: Tips and Tricks for Sharing Great Audio,” by Grace Radkins, Digital Content Librarian at Studs Terkel Radio Library (slides)

“From Theory to Action: Digital Preservation Tools and Strategies,” by Danielle Spalenka, Project Director of the Digital POWRR Project (slides)

Resident Blog Posts

“Digital Stewardship at KBOO Community Radio,” Selena Chau (8/9/16)

“Metadata Practices at Minnesota Public Radio,” Kate McManus (8/15/16)

“NDSA, data wrangling, and KBOO treasures,” Selena Chau (8/30/16)

“Minnesota Books and Authors,” Kate McManus (9/23/16)

“Snapshot from the IASA Conference: Thoughts on the 2nd Day,” Eddy Colloton (9/29/16)

“Who just md5deep-ed and redirected all them checksums to a .csv file? This gal,” Lorena Ramírez-López (10/6/16)

“IASA Day 1 and Voice to Text Recognition,” Selena Chau (10/11/16)

“IASA – Remixed,” Kate McManus (10/12/16)

“Learning GitHub (or, if I can do it, you can too!),” Andrew Weaver (10/13/16)

“Home Movie Day,” Eddy Colloton (10/15/16)

“Snakes in the Archive,” Adam Lott (10/20/16)

“Vietnam, Oral Histories, and the WYSO Archives Digital Humanities Symposium,” Tressa Graves (11/7/16)

“Archives in Conversation (A Glimpse into the Minnesota Archives Symposium, 2016),” Kate McManus (11/15/16)

“Inside the WHUT video library clean-up – part 1: SpaceSaver,” Lorena Ramírez-López (11/21/16)

“Is there something that does it all?: Choosing a metadata management system,” Selena Chau (11/22/16)

“Inside the WHUT video library clean-up – part 2: lots of manual labor,” Lorena Ramírez-López (12/20/16)

“Just Ask For Help Already!” Eddy Colloton (12/22/16)

“Playing with Pandas: CSV metadata transformations,” Selena Chau (1/4/17)

“MPR50,” Kate McManus (2/8/17)

“Before & after XML to PBCore in ResourceSpace,” Selena Chau (2/9/17)

“Advocating for Archives in a Production Environment,” Eddy Colloton (2/27/17)

“Louisiana Public Broadcasting Digital Preservation Plan,” Eddy Colloton (3/6/17)

“Moving Beyond the Allegory of the Lone Digital Archivist (& my day of Windows scripting at KBOO),” Selena Chau (3/16/17)

“Save the Data!” Kate McManus (3/16/17)

“Professional Development Time Project: Audiorecorder,” Andrew Weaver (3/27/17)

“Library Technology Conference,” Kate McManus (3/29/17)

“Reporting from PNW: Online Northwest Conference,” Selena Chau (4/13/17)

“Adventures in Perceptual Hashing,” Andrew Weaver (4/20/17)

“Trying New Things: Meditations on NDSR from the Symposium in DC,” Kate McManus (5/3/17)

Filmed Immersion Week Sessions

Why Archive Public Media

The History of Public Media and the AAPB

Mastering Project Management

Growing Your Professional Profile

Negotiating at Work

Think Like a Computer

Get To Know Your Audiovisual Media 

Many of these resources can also be found on the American Archive of Public Broadcasting Wiki, created by the residents for their collaborative final project.