ODH Lightning Rounds 2018


[Thunder] Patrick: So, hi. My name
is Patrick Murray-John. My project is an integration
between Omeka S and ORCID. ORCID is maybe not as familiar, so I want to start with
what it is and what it is not. It is not an I.D. system
for Orcs. It is also not about
the lovely flowers. For those, please walk
a couple blocks up the street
to the U.S. Botanical Gardens. It is a lovely visit.
You will not regret it. No. ORCID is Open Researcher
and Contributor I.D.s. It is a project to get every
scholar, researcher, contributor to first have a set, stable,
machine-readable I.D.
for them to use. And then include information
about your work,
your background, and especially the grants
that you have received. This is much more popular
among the scientists than
among humanists right now, but there’s no restriction, that it’s all about
the scientists. So that’s ORCID. Omeka S. I’m guessing most of you
are familiar with at least
Omeka Classic, popular web publishing
for GLAMs. Omeka S is a complete
rewrite of Omeka that has an additional emphasis
on institutional needs, especially we are trying to make
it talk much better with things
like institutional repositories, your Fedoras, your DSpaces. So one of the things
that we are anticipating is that as more and more content
goes into those I.R.s with an ORCID I.D.
attached to it, then if we’re going
to play with those I.R.s
and that data, Omeka S has to sort of know
how to handle an ORCID I.D. And by the way, I know that
ORCID I.D. sounds a little bit
like an ATM machine. They’ve coped with that,
so have I. Yikes. So the mission is
to, as Omeka S talks better with
all kinds of different systems, be able to also have it talk
to as many things as possible to support,
to facilitate researchers. [Bell rings] Audience: [Applause]
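The "set, stable, machine-readable I.D." mentioned above has a published structure: sixteen characters whose final character is a check digit computed with the ISO 7064 mod 11-2 algorithm described in ORCID's documentation. A minimal Python sketch of that validation, independent of the Omeka S module discussed in the talk, might look like this:

```python
def orcid_checksum_ok(orcid_id: str) -> bool:
    """Return True if the ORCID iD's last character is a valid
    ISO 7064 mod 11-2 check digit."""
    digits = orcid_id.replace("-", "").upper()
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    total = 0
    for ch in digits[:15]:              # fold the first 15 digits
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11      # a value of 10 is written as 'X'
    expected = "X" if check == 10 else str(check)
    return digits[15] == expected

# ORCID's own documentation uses 0000-0002-1825-0097 as its sample iD.
assert orcid_checksum_ok("0000-0002-1825-0097")
```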
Jeff: Hi. Jeff Leichman, Louisiana State University. In order to stay
within 3 minutes I must read, so I apologize. The [indistinct] project is a collaboration
between humanities scholars in the fields of history,
literature, architecture,
and performance studies and researchers in
computer science and engineering who specialize
in the reconstitution
of historical patrimony and a digitally mediated
simulation of social
interaction. The project acronym sums up
many of our ambitions: Virtual Early Modern Spectacles and Publics, Active
and Collaborative Environment. The short pitch is that
we are developing a video game based on the marginal theaters
that flourished at Paris fairs
in the early 18th century, a view of which can be seen
in this miniature
by [indistinct]. Yet this is also
a vast simplification not least
because of our critical stance with respect
to the conventions of
the commercial gaming industry as we seek to harness
the widespread appeal
of manipulating digital avatars within richly evocative
sensory environments to explore questions related
to scholarly communication, the ethical use of digital media
in the reconstitution of
historical artifacts, and the performance
of sociability both in the Enlightenment
and in the present age. There are multiple objects
of restitution here. I’ll first address
the question of behavior, which is central to both the way
in which we seek to leverage
the idea of video games and the historical stakes
for the public theater
in 18th century France. Spectators at the fair
were extremely active, and this is the role
that we are interested
in having players explore. To accomplish this, we are
working with an A.I.-driven
social physics engine whose rules are derived from
literary and historical texts
that detail the interactions between people of different
social stations in France
at this time. Over the course
of multiple campaigns, players will navigate their way
through different social spaces
of various theaters, charting a progress that will be
unique for each play through. The emergent nature
of this result and the ability
to track which interactions by which kind of avatars
are most or least successful allows researchers
a unique perspective on
the sociability of the past and on how digital environments
corroborate, contradict, and enrich our understanding
of existing archives. As there are no remaining
physical structures
of fair theaters, our architectural restitution
will necessarily proceed
from incomplete sources, including paintings
and written accounts. Here a miniature
of a marionette theater provides a wealth of details
about the ways that popular
entertainments were consumed, but also about the interior
of the fair itself, which was one of the largest
enclosed spaces in Europe
at the time. One of our
signal design challenges is to ensure that the depth
of our computer renderings lies less in the sense of an illusory 3-D space than in the sense of endowing them with the history and the context that allow users to see
beneath the surface of images that can otherwise appear
as assertions of truth whose certainty is
beyond our knowledge. This can be seen in the initial
work done by team member
Paul François, which illustrates competing
theories of one theater’s
physical disposition, and this is the previous slide,
rendered. This speaks to another
fundamental goal
of this project, which is to rethink the way
that we communicate scholarship
and interdisciplinary projects. The density of information
that can be contained
in a computer model that implicates player
performance as a key element
of the research process makes this an exciting and
potentially very appealing way to get our work into the hands
of students and scholars. Thank you. Audience: [Applause] Roopika: Now you get to hear
two academics trying to talk
in 3 minutes. I’m Roopika Risam,
and this is my colleague
Susan Edwards. Building on our work designing
a digital humanities program
at Salem State University, our project, networking
the regional comprehensives, is developing a digital
humanities infrastructure to share resources between
universities like ours, which are facing similar
challenges when launching
a digital humanities program: lack of funding,
lack of institutional support
for the liberal arts, and an underserved student
population without access
to time or resources. Susan: The project
has two components: a survey of digital humanities
practitioners at other regional
comprehensives and a summer 2018 workshop bringing together
regional comprehensive
digital humanities practitioners to design a network for
communication and collaboration
on future initiatives. Roopika: Our survey
is currently underway, and we’re getting
some really useful results on the models of
digital humanities
that are out there at regional
comprehensive universities. We’re finding that there are
3 general models. The first is the lone wolf model
where a single faculty member
or a single librarian, but more typically
a single faculty member, undertakes every dimension
of a digital humanities project
on their own without support
or university partners. The second is
a librarian collective model where we often have a few
librarians working together often without
institutional mandate, and usually
on digitization projects and sometimes on research
projects for faculty members. And the third is
the proto-center model, which features cross-unit
collaboration often between
faculty and librarians but with limited
financial support
or institutional mandate. But what we’re seeing in these
results is that everyone who’s
undertaking digital humanities at these kinds of universities
is doing so in similar ways, and without support from others
or from their institutions, so they’re constantly
reinventing the wheel, and they’re coming to
the same conclusions. Susan: Our project is
well-placed to intervene
in this inefficient use of labor through the network
we are developing. Based on knowledge
from our survey, we are finalizing the program
for our summer 2018 workshop. Oops. Our partners from Wisconsin-Green Bay, Bridgewater State, Southern Illinois Edwardsville, West Chester University, Kennesaw State,
Appalachian State, and the University of Pittsburgh
at Greensburg will be joining us
to share expertise on institutional successes
and failures with
DH initiatives and pedagogy. We will identify points of contact
in our practices where we can collaboratively
build on each other’s work and lay out a plan
for sharing our resources and design
a sustainable network. Roopika: And if you’re at
one of these kinds
of universities, please come see us at cake. Audience: [Applause] Nadjah: Hello.
My name is Nadjah Rios, and I am the project director
of Caribbean Diaspora: Panorama of Carnival Practices. Carnival is the most celebrated
of all cultural traditions in the Caribbean
and the Americas. Grounded in specific localities,
carnival has been theorized as providing a space to question
issues related to processes of negotiation, accommodation,
and resistance to power. Current theories
of technologically
enhanced mobility and notions of place
now see these practices
as happening on liquid frontiers or virtual spaces
and fluid structures. Yet the essential element of
human movement, with practitioners moving around carnival
as musicians, artists, singers, marching bands, masqueraders,
dancers, performers,
spectators, and participants moving within and among islands
as well as musical and cultural
performance traditions, has not been
adequately taken into account. Caribbean Diaspora:
Panorama of Carnival Practices aims to use available
digital archives and the deployment
of digital technologies to enhance and advance
diaspora studies providing innovative pathways
for engaging the wider public who create, sustain,
and participate in Caribbean carnival
production. Accordingly, the project
engages scholars from linguistics,
performance, music, history, communication, library studies,
and digital humanities in the planning of reliable
and affordable archiving,
preserving, and disseminating practices. At the present time,
we are working on the planning
stage to reuse and reinvigorate significant digital production
of the diaspora project, which is a long-term study
begun in 2006, led by the P.I.
and co-investigators, which explores migratory movements
and flows of cultural practices
in the Caribbean region, particularly between Puerto Rico
and the U.S. Virgin Islands. Also this planning grant
will allow the University
of Puerto Rico to bring in much needed
external expertise in developing
digital humanities projects
and creating digital archives. Thank you to the N.E.H.
for supporting a project
for this institution in this moment
that we are living. Thank you very much. Audience: [Applause] Jennifer: Hello.
My name is Jennifer Stertzer. I’m the director of the Center for Digital Editing and an editor at
the Papers of George Washington both at
the University of Virginia. I’m working with my colleagues
Cathy Hajo and Erica Cavanaugh on a project that we’ve entitled
“The Development of Digital
Documentary Editing Platforms.” This comes from a need
in the community through our work at the center
and partnering with projects and through teaching
in the field, so at the Digital Humanities
Summer Institute and at the Institute for the
Editing of Historical Documents for an accessible platform
that will support all aspects
of editorial work, so from document collection
on through digital publication. Cathy and I had both
through our own projects, explored
two possible platforms: in my work with the George
Washington financial papers,
we used Drupal, and in Cathy’s work
with the Jane Addams
documentary editing project, she used Omeka. So our idea was to assemble
a group of folks and bring them together
at a workshop, and so we had really 4 goals
during this grant period. The first was to bring together
folks who are currently engaged
with using both Omeka and Drupal to hear about how they went
about developing their sites, the specifications
that they drew up, their editorial methodology,
things like that. We also wanted to hear from
folks who were considering
digital publication or were at the point of learning
more about technical solutions so that we could hear about
their requirements, the things that
they might be interested in, and ways that we might need
to expand the two platforms that Cathy and I
have been working on. The third thing we wanted to do
is to develop a list
of specifications and discuss how to collaborate
on future development
and distribution of the two platforms. And of course finally,
we wanted to create
a white paper outlining all the presentations
that will take place
at the workshop, our discussions,
and our ideas for next steps. Thank you. Audience: [Applause] Tom: I’m Tom Ewing
from Virginia Tech, and I was director of
this project, Viral Networks, a workshop in medical history
and digital humanities. This workshop
actually took place last week. It took place
in History of Medicine Division at the National Library
of Medicine in Bethesda
on the N.I.H. campus. This photo shows
the workshop participants
in the conference room talking about the drafts
of the papers. The purpose of this workshop
was to bring together
a number of scholars into both a face-to-face and
also a virtual-type workshop. The advisory board
was a collaboration between
historians and librarians. We brought together
some experts in network thinking who opened up
the workshop session, and the real heart
of the workshop was
the 11 contributing scholars, each of whom wrote
a precirculated draft that went out
to all the participants, and then we had a discussion
during the day of the workshop. We also had a keynote speaker,
Theresa MacPhail. As you can see from the tweet
from Seth Denbo from the American
Historical Association, sitting over here,
it really was a genuine workshop in the sense that
everyone was working together
on these collaborative projects. Next step in this process
is to turn the drafts, which appeared
in this print version, into an open-access,
flexible digital publishing model
that meets scholarly standards and yet is easily
and widely accessible. The comments here on the top
came from one of the workshop
participants, and I think we were successful
in creating a model
of a scholarly space that allowed for
innovative visions and projects but also the potential to expand
these projects into the whole. Thank you. Audience: [Applause] Diane: Hello. I’m Diane Fallon
from York County Community
College, Wells, Maine. And my slides have
a couple of objects here. This child’s horn book
is on exhibit at the Wells
Algonquin Historical Society. It dates from the early 1700s,
and it’s in a museum that is
located in a 1682 meeting house on the site
where a meeting house
has stood since 1652 and where
Minister George Burroughs
took a job in 1691 after the Wabanaki
drove Europeans out
of Portland, Maine. In 1692, perhaps when the owner
of this horn book was a toddler, authorities from Massachusetts
arrested Burroughs,
took him to Salem, where he was charged
and convicted of witchcraft
and then executed. These mourning rings, on display
at the Kittery Historical
and Naval Museum, educate visitors about
18th-century death rituals, and also connect
to multiple strands of history: to the fate of loyalists
during the Revolution, as young William’s father
was forced into exile
and stripped of his property, and to the suffering engendered
during the Siege of Boston, as William’s mother, Elizabeth,
died of dysentery due to poor-quality food
a couple of months
after his birth. The rings also connect
to slavery and the slave trade as William’s maternal
great-grandfather
and grandfather Royal grew wealthy
from the slave trade
and connect to the present as Harvard Law School
established with funds
from the Royal family grapples with that legacy. So why am I showing you
these slides? These are tiny institutions
with miniscule budgets filled with amazing objects
and artifacts and documents
that hardly anyone gets to see, just a few, you know, 200, 1,000
visitors a year, et cetera. I work at a tiny institution operating on
a shoestring budget, and I am working with these
institutions to make history
more public. So these are the institutions
that I’m working with
in southern Maine, and we are networking,
collaborating, and eventually I’m doing some
curriculum development work
at my school, trying to develop
a digital storytelling course with the goal of expanding
the capacity for public history
in my region and also sparking interest
in the humanities and in digital humanities
in the next generation
of humanity scholars. And the project funds
professional development
workshops and developing digital
humanities expertise, scholar consultants,
Jessica Parr and Candice Cains
and myself, and there’s a needs assessment
curriculum development
and some other elements, and that’s the project. Thank you. Audience: [Applause] Woman: First,
thank you to the N.E.H. for including me
in this meeting, for funding our project. Pleased to be here
on behalf of the team, which includes
Cynthia Hudson Vitale,
my co-P.I., 3 additional visiting program
officers at the Association
of Research Libraries, Rich Johnson, Matthew Harp,
and John Patterson, and Share co-director
Jeff Spies. I’m a program director
for strategic initiatives at the Association
of Research Libraries and the co-director of Share. Share is a set of tools
and technologies to harvest metadata
about scholarship
from open repositories. Our project, integrating digital
humanities into the web
of scholarship with Share, addresses two separate
but related problems
in DH scholarship. First is the problem
of diffusion. A DH project
has many components, often many contributors,
tools, scripts, data,
images, annotations, and these component parts
can live in different kinds
of repositories or on different websites. They may have rich descriptive
metadata or none at all, and we want to be able
to understand these
separate components as part of the same
intellectual work, which has implications for
everything from preservation
to credit. Second, the tendency of
DH projects and their
component parts to be too tightly bound
on a single website, leaving them outside
of the repository environment
altogether and their component parts
undiscoverable and not reusable
by other projects, adding to the perception that
DH projects are sort of always
bespoke, one-off creations. We care about these problems
as research libraries because we, as you’ve heard,
support the creation
of DH scholarship often in library-based centers
or technologies, lending our staff expertise
and certainly our collections. So it’s vitally important to us
that we also take responsibility
for this scholarship once it’s created. How do we put DH scholarship
in the library? That’s kind of what’s motivating
the project team. And what do we bring
to the party? Happy birthday, ODH. Share is a harvester. It
aggregates metadata
from open repositories, normalizes that data,
and stores it, and provides
an A.P.I. to query it and parse that aggregate
data set. So through a survey
of practitioners, a workshop with experts,
practitioners, librarians,
digital library project leaders, et cetera, and a series
of focus groups coordinated
with DH centers on campus, we aim to gain insights
into scholar work flow and articulate potentially
high-impact interventions
to automate metadata capture, generate recommendations
for supporting the creation
of DH projects to include robust metadata,
create requirements for Share
to display these assets in a way that is responsive
to community needs and design prototype
of that discovery environment. Thank you. Audience: [Applause]
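Share's actual harvesters and A.P.I. are not shown in the talk; the Python sketch below only illustrates the harvest-aggregate-normalize pattern described above, and the repository endpoints and field names in it are hypothetical.

```python
import json
from urllib.request import urlopen

# Hypothetical repository endpoints; Share's real harvesters and API differ.
SOURCES = {
    "repo_a": "https://repo-a.example.edu/api/records",
    "repo_b": "https://repo-b.example.org/export.json",
}

def normalize(source: str, record: dict) -> dict:
    """Map a source-specific record onto one common schema."""
    return {
        "source": source,
        "title": record.get("title") or record.get("dc_title", ""),
        "creators": record.get("creators") or record.get("authors", []),
        "identifier": record.get("doi") or record.get("handle", ""),
    }

def harvest() -> list:
    """Pull records from each source and aggregate them as normalized metadata."""
    aggregated = []
    for source, url in SOURCES.items():
        with urlopen(url) as resp:          # fetch the raw metadata
            records = json.load(resp)
        aggregated.extend(normalize(source, r) for r in records)
    return aggregated
```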
Elise: Good afternoon. My name is Elise King. And my co-project director,
David Lynn, and I are both faculty members
of Baylor University. David is an associate professor
in computer science, and I’m an assistant professor
in the interior design program. Today we’re here to discuss
our project with you, Digital Floor Plan Database,
a New Method for Analyzing
Architecture. To understand our project,
it’s important to understand
the problem that inspired it: floor plans. Floor plans specifically can
provide a snapshot of family
relationships and roles, building culture,
and a human need to define
one’s cultural identity in time and space. Plans are information rich,
and among architectural drawings offer the most comprehensive
overview of a building, informing the reader about
elements such as rooms,
doors, and windows, built-ins, plumbing fixtures,
and stairs. At present, however, it is very
difficult to examine and compare
large numbers of floor plans. Most scholars are limited to
analyzing floor plans manually, either by hand or with
the addition of digital aids. As an architectural historian
and interior designer, I personally discovered the
limitations of current methods when studying the work
of Frank Lloyd Wright. Wright designed
over 1,000 structures. Many of these buildings have
multiple floors and therefore
multiple floor plans. To explore commonalities across
these floor plans, for example, size, square footage of spaces,
number of doors and windows, a historian would need to
analyze hundreds of plans
manually, calculating the square footage
of each room or other data points
individually. As a result,
floor plans, despite being
a valuable resource, are vastly underutilized
in the study of
the built environment. Our solution is to develop
a system that can read, store,
and analyze floor plans. We’ve already developed
a prototype, and with N.E.H. funding, we’re
looking forward to expanding it to include open-source
floor plan recognition software. The prototype, for example,
allows you to manually enter
your information: house name, square footage,
construction dates, architects,
and store it in a database. David Lynn will now explain how
we will use our N.E.H. funding to expand our prototype
into the next phase. David: Thank you. So as mentioned,
we have a lot of floor plans, and we believe
that they’re data rich and we want to provide
a means for us to analyze
and also enter the data. So as you see on the screen,
we are building some tools that, once the data is there,
you can conduct some
[indistinct] studies, queries, and other related
extraction of information. However, the process of actually
entering the floor plans
right now is manual, very time consuming. There are probably better things
to do with the time, so we want to utilize
our computational capabilities to basically scan the floor plan
into a jpeg file and try to extract all
the information automatically. Once we do that, we also want
to annotate it, to allow people
to annotate the data to ensure a high level
of accuracy. And once we do that, we are
hoping the whole process
can be streamlined and it will be very beneficial
to the community at large. Thank you. Audience: [Applause]
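The talk does not show the prototype's schema. As a rough illustration of the kind of record described above (house name, architect, construction date, square footage) stored in a queryable database, a sketch might look like the following; the table layout and values are placeholders, not the Digital Floor Plan Database's actual design.

```python
import sqlite3

# Illustrative schema only; the project's real database differs.
conn = sqlite3.connect("floorplans.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS floor_plans (
        id INTEGER PRIMARY KEY,
        house_name TEXT,
        architect TEXT,
        construction_date TEXT,
        square_footage REAL,
        num_doors INTEGER,
        num_windows INTEGER
    )
""")
# Placeholder values for illustration.
conn.execute(
    "INSERT INTO floor_plans "
    "(house_name, architect, construction_date, square_footage, num_doors, num_windows) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Example House", "Frank Lloyd Wright", "1910", 2500.0, 12, 24),
)
conn.commit()

# Example query: average square footage per architect.
for architect, avg_sqft in conn.execute(
    "SELECT architect, AVG(square_footage) FROM floor_plans GROUP BY architect"
):
    print(architect, round(avg_sqft, 1))
```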
Tom: Hi. I'm Tom Hughes. I'm the associate director for the Frank-Ratchye Studio for Creative Inquiry at Carnegie Mellon University. We're partnering
with the Carnegie Museum of Art to work on a joint investigation
to analyze and improve
the data base and create new multimedia
interactive material for the Teenie Harris
Photo Archive. The Teenie Harris archive
at the Carnegie Museum of Art is an archive of approximately
80,000 digitized photographs
and negatives taken between 1935 and 1975
by Charles “Teenie” Harris, a photographer
and photojournalist
for the “Pittsburgh Courier.” The archive represents
a detailed and intimate
visual record of the black urban experience. Now, the challenge for
the Carnegie Museum of Art lies in the fact that Harris
was functionally illiterate
during his life and as a result, there is very
little pre-existing metadata
and mark-up for his images. So we’re partnering with the
museum to improve the quality
of the metadata that annotates this data base
and also present public facing, interactive access points
for the data base. We’re doing this with
advanced forms of research-grade
open-source machine learning and computer vision techniques. This first slide represents
one of our approaches supporting automatic mark-up
and analysis using something
you might expect– face detection
and facial recognition. We’re using a library
called OpenFace, developed at our own university. It will allow us to
not only detect faces, but reliably know which faces
are similar enough to potentially assert
that they are the same
across the entire data base. This is not meant to be
a fully automated system, but as an assistive tool
for archivists. We’re also using this data
to understand Teenie Harris’
compositional strategies. You can see some of
the compositional lines
represented in this slide. Now you can get a great deal
with automatic mark-up, but we don’t have absolute faith
in automatic mark-up. The real point of this project
is that we want to support
the manual annotation and the work done by archivists. So to assist archivists
during interviews with people from the time period
that the photographs were taken, we’re producing a tool that
shows clusters of images
from the data base at the same time
to a person. We automatically compute these
clusters using convolutional
neural networks and clustering techniques
like [indistinct]. We can then create
an interactive interface for
presentation during interviews. For example, this cluster
of car crashes, which was something that Harris
would have covered during his beat
at the “Pittsburgh Courier.” We feel that this could
drastically improve the efficiency of
the interview process, then we could gather together
all of the, just for example, you could gather together
all the weddings, soldiers, or other images that are
reasonably similar. Finally we’re using some other
techniques to create new forms
of interactive presentation. The purpose here
is less for mark-up, and although mark-up is being
produced by this technique, this is more a benefit
to how the data is presented
to the public. In this case, we’re using
something called Automatic
Semantic Segmentation, which segments an image
semantically by the objects
and things that are in it. I’m going to skip ahead to say
that at a further stage
of this project, we hope to integrate this
with the International Image
Interoperability framework so that we can create
reproducible results that we can share with other
archives, libraries,
and museums. Audience: [Applause]
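The project's own pipeline (OpenFace plus convolutional networks) is not reproduced here. The sketch below only illustrates the general step of clustering precomputed face or image embeddings so an archivist can review similar photographs together; the file names, embedding dimension, and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical inputs: one embedding vector per detected face plus a parallel
# array of photograph identifiers. 128 dimensions is typical of OpenFace-style
# embeddings, but the files here are placeholders.
embeddings = np.load("face_embeddings.npy")   # shape (n_faces, 128)
photo_ids = np.load("photo_ids.npy")          # shape (n_faces,)

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

# Group photo identifiers by cluster so each candidate "same subject" group
# can be reviewed together during archivist interviews.
clusters = {}
for photo_id, label in zip(photo_ids, labels):
    clusters.setdefault(int(label), []).append(photo_id)

for label, members in sorted(clusters.items()):
    print(f"cluster {label}: {len(members)} photographs")
```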
Mark: Hello. My name is Mark Souther
from Cleveland State University. Meshack: And I’m Meshack Owino,
also from Cleveland State
University. Our project Curating
East Africa, a platform and process for
location-based storytelling
in the developing world, has its origin
in the Curatescape
framework developed at Cleveland State University. In 2014, we tried to imagine
how Curatescape might function
in developing world contexts. Under an N.E.H. digital
humanities start-up grant
called Curating Kisumu, we set up a standard
Omeka-based Curatescape site
called [indistinct], and worked with our partners
at Maseno University in Kenya to develop narrative content
and study how the platform
worked in their setting. What emerged was a need
for a mobile fast website
with light data footprint that could be sustainable
in a part of the world where
cost is a real concern. Mark: Accordingly we’ve built
Curatescape for WordPress, a single plug-in coupled
with a recommended theme that is optimized
for developing world context. Our plug-in greatly simplifies
the work flow when compared to Curatescape
for Omeka. A typical story that required
5 tabs, page reloads, and saves can now be created
with just one of each. Using WordPress simplifies
training, lets us automate
image size optimization, consumes much less data,
and lets us leverage
one-click updates in a globally strong
developer community. Although users can pair
the plug-in with any
WordPress theme, our theme delivers a 70% smaller
typical page size than
in the Omeka theme. Meshack: Our content visual
process continues from our
earlier grant phase. Maseno and CSU students
worked together across
an 8-time-zone divide. Maseno students focused on
primary research in Kisumu while Cleveland State University
students focused on scholarly
secondary sources that are difficult or impossible
to access at Maseno. On our recent trip to Kisumu,
our team met with many community
stakeholders to build interest
in the project. The results include cooperative
agreements with the Kisumu
County Ministry of Education, National Museum of Kenya,
Western Region, and Kisumu Museum along with many contacts
who stand ready to work with
our students. Mark: Going forward, we’ll
publish additional content, engage our evaluators in Kenya
and Tanzania, make refinements,
and release the new tool set
with full documentation. In closing, our broad goal
is to build digital
humanity’s capacity in an East African institution while contributing a sustainable
tool set and collaborative model for DH practice
that may be adopted
across the developing world. That is our presentation. We thank N.E.H. and thank
all of you for being here today. Audience: [Applause] Andy: I’m Andy Weislogel
from the Johnson Museum of Art
at Cornell University. With thanks to N.E.H. and on
behalf of–oops. There we go. ….my director,
co-director Rick Johnson, and our student team, it’s my pleasure to introduce
the Wire Project. This project merges
humanities research
with engineering expertise using the computer to broaden
access to the Rembrandt
Watermark Scholarship begun by Erik Hinterding
of Amsterdam’s Rijksmuseum. The iconic Rembrandt
self-portrait you see here is one of thousands of prints
Rembrandt made from the more than 300 copper
etching plates he created. These sheets often reveal
watermarks, papermakers trademarks
imparted by the wires
of the paper mold. For example, here we see
the webbed wings of a dragon
next to Rembrandt’s face. Each different watermark denotes
a distinct batch of paper and thus a particular moment
in Rembrandt’s studio practice. Watermarks convey
crucial information such as when Rembrandt made
artistic changes to his plates when he printed larger editions, for example,
to stave off bankruptcy, and whether specific impressions
were printed by Rembrandt
himself or after his death. The information is useful
to a wide range of curators, scholars, collectors,
and students, but the scholarship is complex
and unavailable digitally, and many of the watermarks
look alike, so identification is tricky. On the left, we see x-rays
of two Rembrandt papers with a Bishop’s staff
and a crown shield. To tell them apart, we look for
concrete visual differences. Here, where the bottom point
of the shield meets
the cross below, the left one lines up,
as you can see, while the right one offsets. Wire Project students
build these differences
into branching decision trees with yes/no questions
that steer a user quickly
to their watermark match. They have developed 14 branches
since fall 2015, encompassing about half
of the 500 Rembrandt
watermark variations known. The project is gradually
building these branches into an interactive
identification tool on GitHub that guides the user with
questions and enhanced images. The solution page provides
dating information about
that particular paper. In this case, Rembrandt printed
15 different plates
on this Swiss paper when he was 28. In this level two grant,
we aim to complete an online decision tree for all
50 Rembrandt watermark types. We will also offer
a student workshop
to complete further branches, develop approaches to add
new watermarks to the tree, present the Wire Tree
as proof of concept for
similar research questions including other artists, and lay the groundwork
for a database of Rembrandt
watermarks. Finally, the development
of the Wire Tool is leading
to new insights. We’re publishing
new watermarks… and we’re collaborating to
develop a portable imaging kit to harvest more paper data
on Rembrandt prints. Thank you. Audience: [Applause]
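The Wire tree itself lives on GitHub and is not reproduced here; this small Python sketch only illustrates the yes/no branching structure the talk describes, borrowing the shield-and-cross question as an example and using made-up leaf labels.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """Either a yes/no question with two branches, or a leaf identification."""
    text: str
    yes: Optional["Node"] = None
    no: Optional["Node"] = None

# Toy branch for illustration; the real tree's questions and watermark names differ.
tree = Node(
    "Does the shield's bottom point line up with the cross below it?",
    yes=Node("Watermark variant A (illustrative label)"),
    no=Node("Watermark variant B (illustrative label)"),
)

def identify(node: Node) -> str:
    """Walk the tree by asking yes/no questions until a leaf is reached."""
    while node.yes is not None or node.no is not None:
        answer = input(node.text + " [y/n] ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no
    return node.text

if __name__ == "__main__":
    print("Match:", identify(tree))
```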
Suzanne: Hello. I’m Suzanne Churchill. Linda: And I’m Linda Kinnahan. Suzanne: And this is Mina Loy, who was an artist, a poet,
a feminist, an inventor,
and an entrepreneur. She consorted with futurism,
Dada, and surrealism and lived in London, Paris,
Berlin, Florence, Rome, New
York, Buenos Aires, and Paris from the 1910s to the 1950s. Her experimental writing
garnered attention
for its bold feminism and its innovative forms, and in this picture here,
she is pictured in Paris in 1921 with other members
of the avant-garde. The historical avant-garde,
literally the advance guard
of an army, refers to early 20th-century
European predominantly white,
male-dominated art movements that oppose mainstream values
and sought to challenge
and shock their audiences. This photograph,
as recently reproduced in the “New York Times
Magazine,” typifies Loy’s position
in the avant-garde. She’s the central animating,
illuminating presence, but she’s also
a ghostly spectral figure, mentioned only in the caption
and never in the article itself. In this way, Mina Loy
is representative of women
in the historical avant-garde. They played major roles,
but they’re often relegated
to supporting parts in the lore or left out altogether. Our N.E.H.-funded project
aims to alter this narrative. Mina Loy, Navigating
the Avant-Garde is a scholarly website
that charts Loy’s career via multimedia user-directed
narratives and visualizations. Our team includes the two of us
as well as Susan Rosenbaum
as principal architects, collaborating with instructional
technologists, librarians, and
students from 3 institutions: Davidson College,
Duquesne University,
and the University of Georgia. Linda: Our goals are to access
and interpret Loy’s work, much of which is buried in
archives or private collections, to transform close reading
to make it more interactive
and contextualized, to develop a feminist theory
of the avant-garde that accounts for women
and people of color, and to involve students
as equal partners in public humanities research, test new processes
for open-peer review, and set U.X. design standards
for digital scholarship. Feminist design
is crucial to our work. For us, that involves
breaking down hierarchies,
reflecting artistic diversity, and recognizing aesthetics as
crucial to digital humanities. One example
of our feminist design is the flash mob formation
of a new feminist theory
of the avant-garde. Taking our cue from classical
ballet rather than warfare, we propose the term
[speaks French phrase], which means coming from
the outside or turning outward. Rather than assuming
a militant position
at the forefront of culture, women and people of color
often came from the outside
and operated on the margins. We’ll use social media to invite
contributors to submit posts
to our site which users
can then select and arrange
in their own formations. So thank you to the N.E.H. for
supporting us in this endeavor. Audience: [Applause] Jonathan:
My name is Jonathan Amith.
I’m from Gettysburg College. I’m an anthropologist
and a linguist, and I’ve been working a long
time on endangered languages
in Mexico, mostly on lexicography, and part of that is
to collect names for
plants and animals uses, symbolic and economic uses. As I work across
different languages, I notice that this is the
nomenclature on classification
and use of plants and animals can reveal patterns of contact
and culture history among different cultures
and languages, so I’ve become
increasingly focused on that, and I realize that
there’s really no standard
for encoding that information. In other words, as different
researchers working in different
areas on lexicography or in ethnobiology
gather information, how can that information be
tagged so that it is useful, it can be discovered,
it can be annotated, and it can serve both
indigenous communities
and the research itself. So this project is both to
develop a standard for encoding
enthnobiological information and to create a data portal
where that can be accessioned
and commented on by people who are using
the portal as well as creating
their own sort of profile so they can see
their own information
as they’ve marked it up. So I’m going to focus right now
on one type of information, which demonstrates cultural
contact, linguistic contact. These are word loans. You can see here–
I have to get my glasses out,
of course. So we have Nahuatl and Totonac
communities in contact. We see [speaks native word]
is in Nahuatl, and [speaks native word]
is in Totonac. Again, that shows
cultural contact. This is probably because
the Nahuatls migrated in. We also see the word for
[speaks native word], also comes from Totonac, and that’s for sort of
a philodendron-type plant. The other type of evidence
of cultural contact,
or what’s called calques. These are basically
loan translations. So we see the word for Camel
Spiders and Whip Scorpions. [Speaks native word]
is the word for shame, and in [speaks native word]
it’s also the shame animal. Again, that shows some sort
of cultural contact, right? You don’t normally
associate shame with this, with these types of animals. And the last are
cultural beliefs. So these are Velvet Ants,
[speaks native word], and it turns out in many
cultures across Mesoamerica,
these are associated with omens. So the question is, how do
we capture this information, how do we tag it in such a way,
how do we allow annotation
of this information, and how do we allow
discoverability of it? So that’s really the project. I’m going to show you that
there’s no system at present
to do that, and so we must create both
the standard and the content
management of the system in order to manage
this information. This is just an example of it.
I can’t get to the pointer. But you can see these are
[speaks native words]. They are words for… The Velvet Ant have
something to do with jaguars, which is not unusual
if you see that up above. In Aztec, there is no word,
and the association
of [speaks native word] with omens occurs
in the 4 [indistinct]
gang cultures and Totonac, again showing a type
of cultural contact. So, you know, the…yeah. [Bell rings] Thank you. Audience: [Applause]
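The encoding standard is still to be designed, so the record below is only an illustration of the kind of information the talk says needs tagging (names per language, loan or calque status, cultural associations, room for annotation); none of these field names come from the project, and the language forms are withheld placeholders.

```python
import json

# Illustrative sketch of an ethnobiological entry; not the project's schema.
entry = {
    "taxon": "velvet ant",                     # common-name gloss of the organism
    "names": [
        {"language": "Nahuatl", "form": "(form withheld)", "source": "fieldwork"},
        {"language": "Totonac", "form": "(form withheld)", "source": "fieldwork"},
    ],
    "contact_evidence": {
        "type": "shared cultural belief",      # e.g. loanword | calque | shared belief
        "note": "association with omens recurs across several Mesoamerican languages",
    },
    "cultural_associations": ["omen"],
    "annotations": [],                         # open slot for community comments
}

print(json.dumps(entry, indent=2, ensure_ascii=False))
```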
Brooks: Hi. I’m Brooks Hefner from James Madison University, here on behalf of my partner
in this project, Ed Timke, at the University of California,
Berkeley. Our project is Circulating
American Magazines. Circulating American Magazines
aims to revolutionize the study of 20th-century
American periodicals through data
visualization tools. The interdisciplinary field
of periodical studies examines influential but also
ephemeral print culture of
the 19th and 20th centuries and has greatly benefited from
digital humanities’ efforts
in recent years, so just the digitization
of major magazine titles
in various only repositories. However, accurate figures
representing the popularity
and reach of any given periodical is
missing from periodical history. Until now, magazine history
has largely depended
on anecdotal evidence, from correspondence, memoirs,
companies’ self-reported data,
or overly aggregated figures. This is where Circulating
American Magazines comes in. Our project collects, collates,
and digitizes data that magazines submitted to
the Audit Bureau of Circulations
between the years 1924 and 1972. Formed by advertisers, the ABC
verified major magazines’
circulation figures to help standardize
advertising rates. Semi-annual reports provided
detailed issue-by-issue geographical and demographic
circulation data. These reports are held
in hard-to-reach volumes, mainly in off-site collections,
at the Library of Congress
in D.C., and the Center for Research
Libraries in Chicago. Circulating American Magazines
makes data from these volumes
available for download and offers a variety
of visualizations, representing circulation
over time and space that will help scholars
and students understand the big picture
of U.S. periodical history. It will permit us
to characterize geographical differences
in magazine circulation and wrestle with
the broader cultural shifts
for the most popular titles, from “Saturday Evening Post”
to “Life” to “Reader’s Digest”
and “TV Guide.” But importantly, this
visual data lets us ask better
and more informed questions about magazine history,
locating previously
undetectable patterns, and historical trajectories. How were popular periodicals
impacted by economic downturns
like the Great Depression? To what degree did content
or design, the presence
of a popular writer, or an eye-catching cover
correspond with higher newsstand
sales of individual issues? How much impact
did the change of an editor or the lowering of a cover price
have on a magazine’s success? How did circulation patterns
of Marvel Comics and DC comics
differ across the country? We even have a brief period
of data from the mid-1940s where we can answer
the age-old question, who was more popular with
readers, Superman or Batman? The project is currently
tracking over 400 titles
across 49 years of reports held in 4 repositories. To this we are adding
advertising companies’
published circulation averages between 1869 and 1924, with nearly a half century of
detailed data and another
56 years of summary data. Combined in our visualizations,
our project aims to become
an indispensable resource for those interested
in American periodical history during the golden age
of American magazines. Thank you
and thank you to the N.E.H. Audience: [Applause]
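Once the audited figures are downloadable, plotting circulation over time is straightforward. The sketch below assumes a hypothetical CSV layout with title, year, and total_circulation columns; that is not the project's published schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("circulation.csv")   # columns assumed: title, year, total_circulation

subset = df[df["title"].isin(["Saturday Evening Post", "Life"])]
for title, group in subset.groupby("title"):
    plt.plot(group["year"], group["total_circulation"], label=title)

plt.xlabel("Year")
plt.ylabel("Audited circulation")
plt.title("Circulation over time (illustrative)")
plt.legend()
plt.show()
```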
Brandon Lunsford: Hello. My name is Brandon Lunsford. I am the archivist at
Johnson C. Smith University
in Charlotte, North Carolina. JCSU is an HBCU and the center
of the only historically
African-American neighborhood that remains in Charlotte. The others were destroyed
very quickly in the ’60s
by urban renewal or very slowly and more recently
by gentrification. That is currently also happening
quite rapidly in the West End
as well. They’re as close enough to
downtown that long-time older
black and poor residents are being priced out
by younger and wealthier
residents who are often white. The idea with our project
is to create a digital map that would delineate
and highlight this
important neighborhood and allow community residents
to tell their own histories. We received a very large photo
collection from the family of
an African-American photographer who had a studio
in the neighborhood, and we are using pictures to do
sort of plot points on the maps so we can add oral histories
that we’re recording
and transcribing, and we’re also going to other
institutions and finding out
what they have online, adding that. I don’t know how to stop
this thing from going. It seems to be… Man: [Indistinct] Brandon: I don’t know. It’s just changing. So we’re going to sort them out,
their oral histories that we
have already recorded and some that have
already been done, and so sort of some of our goals
are to reinforce JCSU as
an anchor of the community and establish ourselves
as a model for other HBCUs and other small colleges
that may want to document
their community history. So confront gentrification
but not necessarily prevent it, to make others aware of the
importance of the neighborhood, to preserve
these images and stories
for future generations, and to increase digital literacy
and bring neighbors together and connect communities
regardless of age, color,
and socio-economic status. That’s it. Audience: [Applause] Britt: Good afternoon. I’m Britt Abel
from Macalester College. Amy: And I’m Amy Young
from Central College. Britt: So we have a confession. Our project was born
of a Facebook rant. A colleague of ours posted
her frustrations with
heteronormativity in beginning German textbooks. Amy chimed in, as did I. We were all keenly aware
of the disconnect between what our students
experience in their daily lives and what is depicted in
the learning materials that
we bring into our classrooms. Amy: Students didn’t like
the cost of textbooks. Faculty were frustrated
by the new editions
with minimal changes. When Britt decided to attend a
presentation on open-educational
resources, or O.E.R., Grenzenlos Deutsch was born. Britt: The result is
an open-access, no-cost, immersive curriculum
for use in the first year
of the German classroom. This project aspires to provide
an inclusive curriculum containing material that
reflects the diversity of
sexuality, class, race, gender, ability, and ethnicity
of our students and material
that includes topics
related to sustainability. Amy: We’re creating a curriculum
that not only contains
sustainability content but is also produced under
sustainable circumstances. The cost to students
is in alignment with
the compensation for creators. Instead of expensive textbooks
with minimal compensations, we opted for no compensation
but cost-free access for
students and teachers. The only monetary barrier
for Grenzenlos Deutsch is
the cost of the internet, and for that reason we rely
on grants to fund the creation
of this curriculum. Thanks, N.E.H. Britt: We’re still going to go. We still got
a couple more minutes. The first phase of our project,
which was funded by
a different grant, sends Amy along with an
undergraduate from each
of our institutions to Germany and Austria. They shot photos, conducted
video and audio interviews, and gathered various materials
for the project. We sorted through that content
and began the process of
authoring the curriculum. Because we wanted to interweave
video and audio materials and interactive activities
seamlessly, we decided to build
the curriculum into a website with HTML 5
interactive activities. The content-based units
allow for flexible ordering and resist the linear structure
of a traditional print
or PDF textbook. Amy: So this summer we will be
using the bulk of
our N.E.H. funds to send a collaborative
working group to Vienna…
Vienna, Austria, by the way, where we will be teaching our 8
members of our board of authors how to author using the tools
that we’ve chosen. And we will then finish writing
the bulk of the curriculum. We’re hoping to pilot it
in the fall. We hope that this will be a
collaborative writing process
as well and that the tools and pedagogy
will all be a productive model
for O.E.R. in the future. Britt: Thank you. Audience: [Applause] Julia: Hi, good afternoon.
I’m Julia Flanders. I’m the editor-in-chief of
“Digital Humanities Quarterly,” which I should show you. Which is an open-access
peer review journal
of digital humanities now in its 13th year
of operation and its 11th year
of publication with over 350 articles published
all encoded in TEI XML, and each a bibliography of
between 10 and 50 items roughly, so we have
a cumulative bibliography of about 5,000-10,000 items
and growing, not just because new material
is of course being published
all the time, but also because
the intellectual watershed of the field of
digital humanities is expanding. So this gives us
a practical challenge
and also an opportunity. The challenge is just how
to manage this scale as part
of an efficient work flow. The opportunity is
how to represent and expose this intellectual history
of the field in a way that can support
analysis of change over time, of differences between
disciplinary subcommunities, of geographic
and cultural differences, and of the influence
of specific research and the spread
of specific ideas. A few years ago, DHQ was
fortunate enough to receive a digital humanities
start-up grant from the ODH–
thank you– to create
a centralized bibliography
of digital humanities, and we were able to make a good
start, but we didn’t have time
to finish the work, so our current project–
thank you again, ODH- is an attempt to complete
this work, and it also incorporates
a new element of
rhetorical analysis arising from the research
interests of Gregory Palermo, graduate student
who is part of the team
with whom I wrote this proposal. Greg is a managing editor at DHQ and his work focuses
on a combination of
computational text analysis and network citation analysis. So the data at stake
is fairly straightforward. It includes T.E.I. articles
that contain indexed quotations
and citations which point to the article’s
bibliography. The article’s bibliography
contains references that link it in turn
to the central bibliography where data has been
regularized and coded. And at publication,
the transformation process pulls in the data
from the central bibliography to populate
the article bibliography. So the potential for analysis
here is very broad and exciting, and I’ll just highlight
a few things. First, bibliographic data on its
own with information about which
articles cite which sources can support things like
co-citation analysis and analysis of
bibliographic coupling. Taken together with our T.E.I.
encoding of the article, which represents the location
and context and citations including
specific material quoted, we can also look at where
and how citation is performed
argumentative rhetoric. And for instance, where do
citations reflect extended
engagement with a single source, on the left, or ongoing varied
citation of many sources
throughout the article, for example in the middle, or a literature-review-style cluster
at the very, very beginning
or varied other patterns. And coupled with metadata concerning
the author’s institutional
and geographical location, we can start to get a picture
of how different communities
within DH engage differently. And these are just
our questions. The central bibliography will
have an A.P.I. and an interface
for public exploration, hopefully
some cool visualizations, so we hope people will be able
to explore a vast array of other
avenues of research in the data. Thank you. Audience: [Applause]
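As a small illustration of the co-citation analysis mentioned above, the sketch below counts how often two bibliography entries are cited by the same article. The article-to-reference mapping is hard-coded with placeholder IDs rather than extracted from DHQ's TEI markup.

```python
from collections import Counter
from itertools import combinations

# Placeholder mapping: article ID -> set of bibliography IDs it cites.
# In DHQ's case this would come from the TEI citation encoding.
citations = {
    "article1": {"refA", "refB", "refC"},
    "article2": {"refB", "refC"},
    "article3": {"refA", "refC"},
}

# Co-citation: two sources are co-cited when the same article cites both.
cocitation = Counter()
for refs in citations.values():
    for pair in combinations(sorted(refs), 2):
        cocitation[pair] += 1

for (a, b), count in cocitation.most_common():
    print(f"{a} and {b} are cited together in {count} article(s)")
```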
Marit: Hello. I’m Marit MacArthur from the University of California, Davis, and rather than give
an overview of this project, I’m going to share one finding
that might illustrate the kinds
of insights it might bring us. So here, this pitch-graph shows
you what is probably… or who is probably Walt Whitman
and his intonation pattern
reading the poem “America.” Some of you
have probably heard it. You might hear it in your heads,
replay it right now. Rhetorical question: How do
you like his reading style? Does it sound ponderous,
theatrical, authoritative, old? We often have strong responses to what I call
performative speech, but it can be hard to pin down
what we are hearing. Some of you have heard of
poet voice, for instance. What is it? Is it a default neutral style
of poetry reading? Is it incantation that induces
a pleasant trance? Is it boring?
Does it induce sea sickness
as the urban dictionary implies? What are poets doing with their
voices when they use poet voice? With these tools,
we found that 4 poets
who seem to do poet voice– Louise Glick,
Natasha Trethoway,
Julianna Spar, and Michael Ryan– aren’t doing quite the same
things with their voices. One uses
a very narrow pitch range, another uses a very regular
rhythm and speaks faster
than the others, two use very slow pitch speed,
so they change the pitch of
their voice a little bit. The one thing
their voices have in common
is slow pitch acceleration, which it takes practice
to hear to differentiate compared to a group of 100
American poets that I sampled. Maybe when we hear one
of these slower tendencies,
we pigeon hole it as poet voice. What the tools on this project,
we’ll do–I hope– is to help refine our sense
of what we hear by visualizing and quantifying
data about pitch and timing
and recorded speech. This should help us trace trends
and performance styles
and their evolution in many types
of performative speech, including primarily
talking books, radio drama,
and poetry recordings. And these tools can deal
with noisy old recordings
like the Whitman. I’m working with Neil Verma
at Northwestern and Mara Mills at NYU as well as a neuro scientist
of speech perception at U.C. Davis,
Lee Miller, and a brilliant
software developer from the bleeding edge
of Silicon Valley,
Robert Oxhorn, and 18 user testers
from the U.S., Canada,
and Mexico, and Germany. And we are open to more,
so please get in touch if you do
research on speech recordings. Thank you. Audience: [Applause]
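The project's own tool chain is not shown here. As a rough illustration of extracting a pitch contour and one simple measure (pitch range) from a recording, a sketch using the librosa library might look like this; the file name is a placeholder, and pitch range is a much cruder measure than the pitch-speed and pitch-acceleration statistics described in the talk.

```python
import numpy as np
import librosa

# "reading.wav" is a placeholder file name for any speech recording.
y, sr = librosa.load("reading.wav", sr=None)
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

voiced = f0[~np.isnan(f0)]                    # keep only voiced frames
pitch_range = voiced.max() - voiced.min()     # one simple, crude measure
mean_pitch = voiced.mean()
print(f"mean pitch {mean_pitch:.1f} Hz, range {pitch_range:.1f} Hz")
```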
Brian: Hello. I am Brian Joseph. Christopher: Christopher Brown. Brian: We are part of a team at The Ohio State University, working on a larger project,
the Herodotus Project, and its specific subpart
that N.E.H. has funded
under the DH rubric. Historians working on ancient
times have focused largely on noteworthy individual persons
and places, and the result of that focus
is that no comprehensive
catalog exists of peoples in ancient times. It is tribes, clans,
and other groups. We are aiming to fill that gap
and to create such a catalog, and then to work with
that catalog to develop
an integrated data base of the peoples mentioned
in ancient classical sources drawing on different fields
of research for the data
about them, fields ranging from philology
and textually based work to archaeology
and even genetics. In our first phase,
we were working to automate the identification of such
group names in ancient sources by using the computational
methodology known as named
entity recognition, and it’s for the development
of such a system that
we received our grant. Christopher
Named entity recognition is a computational machine
learning system for recognizing important pieces
of information in texts, entities that are identified,
named, e.g., persons, places,
and groups. We began working with the
existing English-based N.E.R.
and translations aided by David Smith’s work
at Perseus on Herodotus, and David Bamman’s
N.E.R. assistance, but for comprehensiveness
and accuracy, it’s better to work
in the original languages. With N.E.H. help, we are
perfecting our Latin N.E.R. and developing an N.E.R.
for Greek. What characterizes
these ancient languages is sparcity of computational
resources and a relatively
limited data set. This involves working with
diverse authors and genres, such as historiography,
poetry, and letters. This will be useful
to other scholars working
in Greek, Latin, and eventually
other ancient languages,
such as Old Church Slavonic. Brian: Our solution
to issues raised by N.E.R.
of group names in Latin is to target the needs
of the digital historian whose time is scarce
and whose primary concern
is accuracy. Thus we select sentences to
annotate for named entities which will be most informative
for the development of and the continuing progress
with our name entity
recognition model. The model’s expected accuracy
will increase as a function of how much
additional annotation
one feeds into it. We have a team
of student annotators
currently working on Latin texts annotating them for use
as the training data
for our N.E.R. system. Wet then test the efficacy of
the system on unannotated data and then allow the system
to enter into a feedback loop so that it learns from the tests
and improves its performance. We’ve reached accuracy levels
of over 95%. We see this system and general
approach as being of use
not just to our goals, but to digital historians
more generally. Thank you. Audience: [Applause]
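The sentence-selection step described above is a form of uncertainty-based active learning. The sketch below illustrates the idea with a hypothetical `predict_proba` callable standing in for whatever the trained N.E.R. model exposes; it is not the project's code.

```python
import math

def token_entropy(prob_dist):
    """Shannon entropy of one token's predicted label distribution (a dict)."""
    return -sum(p * math.log(p) for p in prob_dist.values() if p > 0)

def sentence_uncertainty(sentence, predict_proba):
    """Average per-token entropy: higher means the model is less certain."""
    dists = predict_proba(sentence)        # one label distribution per token (hypothetical API)
    return sum(token_entropy(d) for d in dists) / max(len(dists), 1)

def select_for_annotation(sentences, predict_proba, k=50):
    """Return the k sentences the current model is least certain about,
    i.e. the ones most informative to hand to the student annotators."""
    ranked = sorted(
        sentences,
        key=lambda s: sentence_uncertainty(s, predict_proba),
        reverse=True,
    )
    return ranked[:k]
```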
Hi. Last one for now. I’m Michelle Weigel
from Old Dominion University, and this is a preview of joint
work with my fellow computer
scientist Michael Nelson and our collaborators from
Columbia University Libraries, the Frick Art Reference Library
in New York. Our goal is to provide a tool
that will help archivists and
digital humanities scholars understand how a web page
changes over time. Our collaborators
work with art historians
and human rights researchers who would benefit
from this tool. This project is one of the
joint N.E.H. I.M.L.S. awards, and we were grateful for
the support from both agencies. Let’s say, for example, the
Office of Digital Humanities
web page. The internet archive
has captures going back to the
very beginning in March 2008. Since then, there have been over
500 captures of the web page. Now, Brett and others here
could probably pick out perfect examples of a small
number of individual ones
to show in a summary, but for the rest of us,
we would need to click on
each of the 500 captures to figure out which would be
the important ones. In previous work, we developed
a technique to examine only the
HTML source of these captures, or mementos, to measure
how much they changed over time. Then we can select those
that have demonstrated
the most change and generate screen shots
of them. This is much more efficient
than generating screen shots
of each individual memento and then measuring the change
in the images. In this project, we are
implementing this technique and then developing several
different ways to display
the screen shots. In all the views, clicking
on the image will play back
that memento from the archive. This first view is the animation
or slider view. You have a single image that
can be animated through time
as you see here. Or the user can swipe
their mouse over the image to control
when the image changes. Our second view
is a timeline view. It shows a mark for each memento
on a timeline to give an overview of how
that web page has been archived. The yellow lines
are the mementos
that our technique selected, and the gray lines are all of
the others that were similar
to one of the selected mementos. The timeline here is based on
TimelineSetter from ProPublica. The interface allows you to
select different images to view, to scroll through time
and to zoom in and out
through time. The final view here shown
is the simplest, the grid view, and you can see that
the 500-plus mementos have been summarized
in just 11 screen shots,
about one per year. By the end
of this 18-month project
we plan to deploy a web service so that users can input
a web page URL and see how it’s changed
over time. We also plan
to develop an extension
for the Wayback Machine, which drives playback
for many web archives, and this allows
the archives themselves
to provide this service. We also plan an embed service
so website authors can put these
in their own websites. So for updates, you can check
out our Twitter, @WebSciDL. Thank you. Audience: [Applause]
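A minimal sketch of the selection idea described above, with invented captures: compare the HTML of successive mementos and keep only those that differ markedly from the last one kept, so hundreds of snapshots reduce to a handful worth rendering as screenshots. difflib stands in here for whatever similarity measure the project actually uses.

    # Keep only captures (mementos) whose HTML differs markedly from the last
    # kept one; those are the ones worth screenshotting. Toy data below.
    from difflib import SequenceMatcher

    mementos = [                       # (timestamp, html) pairs; invented
        ("2008-03-01", "<h1>ODH</h1><p>Welcome</p>"),
        ("2009-06-01", "<h1>ODH</h1><p>Welcome!</p>"),
        ("2012-01-15", "<h1>ODH</h1><ul><li>Grants</li></ul>"),
        ("2018-05-20", "<h1>ODH</h1><ul><li>Grants</li><li>News</li></ul>"),
    ]

    THRESHOLD = 0.25                   # minimum dissimilarity needed to keep a capture
    kept = [mementos[0]]
    for stamp, html in mementos[1:]:
        similarity = SequenceMatcher(None, kept[-1][1], html).ratio()
        if 1 - similarity >= THRESHOLD:
            kept.append((stamp, html))

    print("captures worth screenshotting:", [stamp for stamp, _ in kept])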
Ellen: I’m Ellen Rocco at North Country Public Radio. We serve the northern third
of New York State. It’s the Adirondacks,
the Canadian border. 250 or so small communities
scattered across the region. And then there’s a lot of trees. We wanted to do
a digital first project to explore the history of work
across the region, and we decided to do it
with photographs. So we’re going around
to libraries, little
historical associations scanning photographs,
encouraging members
of the communities to bring their…
rummage their attics and albums
for those photographs, tell us their stories. We wanted to organize
the content in a way
that was very public facing, highly searchable,
curiosity encouraging,
discovery encouraging. And we discovered there’s
no platform out there
that does this. Oh, we looked at PastPerfect.
I mean, we looked at
a bunch of stuff, and unless you want to spend
hundreds of thousands
of dollars, the platform doesn’t exist,
so we decided to build it. And after we got it
to the beta stage, we came to the N.E.H.
to do two things. White label the platform
so that we could share it with other organizations
in humanities and public media
to use for their projects, which could be organized
around photographs or you can organize them
around other forms of media, and certainly
you don’t have to explore work. You can explore art, culture,
whatever you want to explore. So we’re white labeling it
this year, and we’re adding some features
including–I’m going to steal
from the Library of Congress. I’m going to steal
the guess the date feature. It’s all WordPress based,
but what we did was we built
a relationship mechanism within WordPress
that doesn’t exist. We wanted this to be very easy
to apply anywhere in the world including in organizations that
don’t have deep digital chops. So it’s open-source, WordPress, and lots of new features
coming in the coming year. I’m not going to go into them,
but if you’re interested, you can reach me
at [email protected] Thanks. Audience: [Applause] Ellen:
Oh, since I have time left, there are some of
the other screen shots from us. Am I hitting
the right thing here? Did we get all 3? And that’s just the WordPress
back end architecture. Audience: [Applause] Maylines:
Hi. I’m Maylines Hunter, a doctoral candidate
in the department of
comparative literature at Stanford University. I’m here today
as project manager
for the Global Medieval Sourcebook. Today in the audience with me
is the P.I. of the project,
Dr. Kathryn Starkey, a professor
of Medieval literature
at Stanford University. The GMS is an open-source,
open access online resource which provides students
and scholars access
to medieval writings from a wide diversity
of languages, traditions,
periods, and places in new English translations
alongside the original text and digitizations of
the manuscript or print source. Our project responds to
a disconnect in our field
of medieval studies between the translations
that scholars make often to support
their own research and the texts that students
and the interested public
have access to. Scholars often have little
opportunity or incentive
to publish translations of short or noncanonical works, but they can often
make great teaching material and can provide necessary
disruption to a narrow
and outdated canon. As the recent blockbuster
successes of “Game of Thrones” and “The Lord of the Rings”
suggest, fascination with
medieval culture
goes far beyond the classroom. It even seeps into contemporary
political symbology, making it all the more urgent
that high-quality information
be made available to challenge false notions
and misunderstanding. This is why our project
not only presents but
contextualizes texts with accessible introductions
and recommendations
for further reading. Through showcasing the diversity
of global text production
between 600 and 1600 C.E., we seek to provide a platform
for exploring patterns of
influence and exchange, fostering collaboration between
those working on Western Europe, the Byzantine Empire,
the Islamic Empire, Middle Imperial China,
and beyond. Oh. In designing our platform,
we strove to balance function with interoperability
and ease of maintenance. So our technical lead,
Michael Widner, customized existing software
called Versioning Machine that reads texts encoded
in TEI XML and displays them
in parallel panels that users can select
and move around as desired. This allows us to display texts
which come down to us
in several versions without artificially
amalgamating them
into a best reading. In the future we will adapt
this software further to allow readers to pull
together their own choices
of texts onto a single page which we hope will
facilitate interesting
comparative work. Currently we have more
than 100 texts in the pipeline
in almost a dozen languages, but in the next year,
we will expand our collection with an emphasis on currently
under-represented regions
and languages. With our grant from the N.E.H.,
we will improve our integration
of high-quality images and add text annotation
for pedagogical use as well as broaden our
submission possibilities through
a crowd sourcing interface. We are planning to collaborate
with developers at Stanford’s Center
For Interdisciplinary
Digital Research to build interactive map
and timeline visualizations
of our repository with Stanford University Press
to establish peer
review procedures and with Stanford Libraries
to plan for the project’s
preservation. Check out our prototype
at Gms.stanford.edu. Thank you very much. Audience: [Applause]
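Because the Versioning Machine works over TEI-encoded texts, a rough illustration of the kind of markup involved may help: the sketch below pulls the variant readings out of a tiny invented TEI-style apparatus entry so that each witness could be laid out in its own parallel panel. It is a sketch of the general encoding pattern, not the GMS’s actual files.

    # Extract variant readings from a tiny TEI-style apparatus entry so each
    # witness's text could be shown in its own parallel panel. Invented sample.
    import xml.etree.ElementTree as ET

    TEI = ('<l>Ich was ein <app>'
           '<rdg wit="#A">kint</rdg>'
           '<rdg wit="#B">kindelin</rdg>'
           '</app> so wolgetan</l>')

    line = ET.fromstring(TEI)
    for rdg in line.iter("rdg"):
        print(f"panel for witness {rdg.get('wit')}: ...{rdg.text}...")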
Brian: Let’s see. Ah. Hello. I’m Brian Wagner
from U.C. Berkeley, the project director for
Louisiana Slave Conspiracies. We are
a collaborative research project dedicated to preserving,
digitizing, transcribing,
translating, analyzing, and publishing manuscripts from
several slave conspiracies, in particular two that occurred
at the Pointe Coupee Post in Louisiana in 1791 and 1795. You can see on the map there
in the red triangle where Pointe Coupee is located. These manuscripts are
almost entirely testimonies taken from slaves accused
of conspiring to revolt. The stories they have to tell
are amazing. This gives you a sense
of what the documents look like. So far we have digitized more
than 1,800 folio pages
in French and Spanish, crowd-sourced their
transcription and translation, and extracted demographic
and geospatial information relevant to persons and places
involved in the conspiracies. That’s a current data model. We are building an online
interface with a facing
page display to present manuscripts
alongside transcriptions
and English translations, permitting users to navigate
among documents and custom
interactive historical maps. We are preserving everything
for long-term access through
the California Digital Library. So now, with some generous
support from the ODH, we’re entering a new stage
in which we are attempting
to take a new approach to a general problem
that is widely recognized in the existing historiography
on slave conspiracies. The problem is that knowledge
of these conspiracies is based
on unreliable evidence. Confused and self-contradictory,
paranoid, and delusional. Available court records,
council minutes, and government correspondence
do not give us a clear picture
of events. In our case, testimonies
are based in rumor and hearsay. They feel a little like
they’re from the film “Rashomon” in that we have different people
telling different stories
about the same set of events. So not surprisingly based
on this unstable information, scholars have
come to diametrically
opposed conclusions even as they’re working
from the same evidence. So what we’re trying to do
is bring data science approaches
to bear on this long-standing problem. Our intention isn’t to
resolve things once and for all or break the impasse
in scholarship, rather it is to parse
and represent our uncertainty about
conspiratorial collaboration
and communication through text analysis,
including named entity
recognition and network and– Is that the final thing?
No? Ok. Just checking.
Just checking! So through text analysis
and narrative and network
visualization, in particular using
the software package Gephi to analyze competing accounts
of social organization
and political imagination. The technical challenges here
are considerable, not least
the challenge of balancing
the categorical certainty implied by maps
and data visualization with the inconclusive nature
of our evidence. Ok. Audience: [Applause]
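As a hedged sketch of the network step mentioned above (not the project’s actual pipeline, and with invented testimony data): build a co-mention graph of the people named together in the same testimony and export it as GEXF, a format Gephi can open.

    # Build a co-mention network from testimonies and export it for Gephi.
    # The testimony data below is invented for illustration.
    from itertools import combinations
    import networkx as nx

    testimonies = [
        {"witness": "Antoine", "named": ["Jean", "Baptiste", "Louis"]},
        {"witness": "Marie",   "named": ["Baptiste", "Louis"]},
        {"witness": "Pierre",  "named": ["Jean", "Louis"]},
    ]

    G = nx.Graph()
    for t in testimonies:
        for a, b in combinations(sorted(set(t["named"])), 2):
            weight = G.get_edge_data(a, b, {}).get("weight", 0)
            G.add_edge(a, b, weight=weight + 1)   # heavier edge = named together more often

    nx.write_gexf(G, "comentions.gexf")           # open this file in Gephi
    print(sorted(G.degree(), key=lambda pair: -pair[1]))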
Scott: Hello. My name is Scott Branting. And on behalf of my co-directors
Lori Walters and Joseph Kider from the University
of Central Florida, I’m presenting today
on the DATCH Project. DATCH stands for documenting
and triaging cultural heritage. At its essence, it’s a software
development activity through which we hope to expand the use
of augmented reality devices to allow for at-scale capture
of actual digital content alongside
its more traditional uses
for display and interaction. Now, this grew out of
two different hats that I wear. The first hat is
as an archaeologist
for the past 25 years. As archaeologists, we are constantly dealing with the constraints of space and time. We never have enough time when
we’re actually in the field, and one of the
time-intensive tasks that we do, one of the most
time-intensive tasks is actually the recording
of sections, plans, stone-by-stone drawings
of the excavations
that we undertake. Traditionally we have done this
with plumb bobs and with rulers and with pieces of paper
and pens, and it takes an extraordinarily
long time to be able to do this
in the field. At times, certainly hours
if not days and weeks in particularly complex areas. We’ve since shifted as a field
to using more photography, and in particular photogrammetry,
which has sped up that process. However, if we have
an augmented reality device where we could capture
an image with the augmented
reality device, have it then be registered
in real time to the object that you’re
seeing in front of you at scale, and then could use
your little finger to
draw along the outside, we’d be able to speed this up
exponentially. The other hat that I wear
is as one of the
project directors for the American Schools
of Oriental Research Cultural Heritage Initiatives. This is a project that,
in collaboration with
the U.S. State Department, has been monitoring the cultural
heritage destruction taking place within Syria
and Iraq and Libya. We’re both monitoring it,
but we’re also collecting
information about it and documentation
about the damages occurring. And one thing that we’ve
looked forward to as a project is eventually
when the conflict ends or in other situations involving
social or natural disasters we can send first-responders in
who are able to do a triage and understand the amount
of damage that’s taken place and put forth the first steps as to how we can actually
go ahead trying to mitigate
the damage or what we can do with the
cultural heritage that’s left. One of the big problems
that they also face there
would be time, and so if we can load up
ahead of time the data
that we’ve collected into the augmented
reality device, have ready-to-go
stone-by-stone plans that they can see in real time
over the cultural heritage, we can greatly speed up
the amount of time and the amount of sites
they’ll be able to then cover as they move through
wherever they’re going to within
the areas in the Near East. Thank you very much. Audience: [Applause] Christine: Hi. My name is
Christine Frei. I’m the director of
the Penn Language Center at the University
of Pennsylvania, and I’m here with my colleague
Timothy Powell, who is the director of EPIC, which stands for
Educational Partnerships
with Indigenous Communities. The University of Pennsylvania
offers 42 different languages
each semester. We are very much committed
to language revitalization
and also language diversity. Thus we are very much committed
in this project, thanks to N.E.H. to also include
indigenous languages. We are currently offering
several indigenous languages and also
less-commonly taught languages, and the project that we’re
embarking on is really to look at how Native American tribes use digital technology
to revitalize, preserve,
and represent their respective
language and culture. We are particularly interested
with our partners at the
University of Pennsylvania, the Price Lab
for Digital Humanities, to also develop tools
that can really help
our indigenous communities to represent their knowledge
of a non-linear understanding
of our world. And so thus I think it’s going
to be really interesting to see
what kind of tools will come up. Timothy: So the grant itself,
we’ve been working on this
project now for about 17 years. And the grant, what we’re going
to do is we’re going to form
a consortium, that includes the National
Anthropological Archives
at the Smithsonian, Recovering Voices,
the Library of Congress, The American
Philosophical Society, and The Archive
of Traditional Music at Indiana. All of these institutions
are currently digitizing
Native American materials. The problem is it never reaches
the reservation. So just to give you
a quick example, The American
Philosophical Society
has the Ella Deloria collection. Ella Deloria was the first
Native American anthropologist. She’s from Standing Rock. She devoted her entire life
to documenting language. The tribe has never seen
all the unpublished material. So I called up my friend
Don Burdette at Archive
of Traditional Music, and he says,
“Oh, well, we just digitized
the Deloria wax cylinders.” So when you bring those two
things back to the community, the community is going to build
a Deloria Center, and at the Deloria Center,
they will train
for the first time young, upcoming Native American
kids in digital humanities skills. What does digital humanities look like from a Native
American perspective? Well, to give you
a quick example, we did a timeline
of Iroquois history, and you know, being
white people, we were like,
“Oh, well, it starts in 1492.” We went over and talked
to the elders and talked
to the fluent speakers. They were like, “No, no.
It begins when Sky Woman
fell from the Sky World.” Well, you can’t put
the Sky World on a story map,
and you can’t include– There’s no chronological
timeline to date that event, and so what we’re hoping
is that we can help
the digital humanities by teaching how to incorporate
different cultural perspectives. Thank you. Christine: Thank you. Audience: [Applause] Laura: Hi. I’m Laura Aydelotte
from the Kislak Center for Special Collections, Rare Books, and Manuscripts
at the University
of Pennsylvania. Ha ha! Fellow Penn person.
Ha ha! And I’m here to talk to you
today about the Philadelphia
Playbills Project. So when I was doing curatorial
work for the Furness Shakespeare
Library at Penn, I discovered that we have
thousands of playbills
sitting in boxes back in the back stacks
that nobody ever sees. And that furthermore,
these are uncatalogued, so researchers don’t even know
that these resources exist. And this is true
in libraries around the country, that you’ve got literally
millions of playbills
sitting in archives and nobody’s really looking
at them. And if you’re looking
at the screen right now, you may be leaning in
and saying, “Darn, I wish I could read
all the text on that playbill.” You have thereby identified
the value of these things, which is that they are filled
with information. You’ve got
the names of the theaters,
the titles of performances, the names of actors, dates,
prices, all kinds of stuff. This is data in these
19th-century ephemeral objects. So I said to myself not only
do I want people to know these things exist
and have records for them, but I want to be able to type in
a query for Mozart and know that on March 22, 1822, there was a performance
of Mozart’s “Marriage of Figaro” followed by the appearance
of a living elephant, and I want to know that
that happened on the stage
at the Walnut Street Theater pictured in the middle there, and that that’s one of
the oldest theaters in America. It’s actually the oldest
continuing operating theater
in the nation. So how do we get from
these objects sitting
in archival boxes to data? So this is one of
my favorite playbills because it’s a production of
“Hamlet” followed by the play
“Your Life’s in Danger,” so it’s kind of
the lightning round talk
for Shakespeare’s longest play. And so we take–We’ve got a
sample set of about 700 images that we’ve digitized
at Penn libraries of playbills like this
from the 18th and 19th century
performances in Philadelphia, and we’re going to use the
Ensemble transcription software developed by
the New York Public Library, and further adapted by the Yale
University Library Digital
Humanities Laboratory. So we’re working with NYPL
and Yale DH right now to get Ensemble up on Penn’s
servers so that everyone in this
room, everyone in Philadelphia, everyone in the nation can
transcribe these playbills
and we can turn them into data and we’ll have the seed for like
the IMDB of the 19th century. And once we’ve got that data,
towards the end of the project, we’ll have sort
of an experimental stage where we start adding U.R.I.’s
to that data. We also have existing records
for the Furness Theatrical
Image Collection. We’ll turn that into linked data
as well, and in-house we’re going
to start playing with how we can connect the data
from the playbills. Great things like a real human
skull that an actor donated
to the Walnut Street Theater, to pictures of actors,
et cetera. So it’s going to be great. Thank you to the N.E.H.
for recognizing every show
needs a backer and getting this one
on the road. Thanks. Audience: [Applause] Steve: Hi. I’m Steve Jones of
the University of South Florida, and I’m director of a project
called Reconstructing the First
Humanities Computing Center. Our team is studying the history
of Father Roberto Busa’s Center
for Literary Data Processing, which he often just called
his lab. It was first fully established, as seen here,
in Gallarate, Italy, outside Milan from 1961
to 1967. There, punch card operators,
mostly young women, worked on a massive lemmatized
concordance to the works
of St. Thomas Aquinas, on an index to the newly
discovered Dead Sea Scrolls,
and on other projects. This was a former
textile factory. You can see it in the long floor
with the accordion roof and multiple skylights for
efficient distribution of light
down onto the line. Busa repurposed it
from textiles to texts, woven fabrics
to skeins of words. About 80 of the photos
in the Busa archive
are of this center, almost exclusively
of its interior. When I visited the site, I learned that the long building
we see in all the photos
is gone. It was demolished
by the early 2000s. Only the square entryway
courtyard building remains,
and it was recently renovated. A local team met me there
last November, including the architect
of the renovation and two of the former
punch card operators, Gisa Crosta and Livia Canestraro. Unsolicited, Canestraro brought
with her this hand-drawn floor
plan of the demolished building, which she had sketched
on graph paper, of course,
that very morning at breakfast along with a personal snapshot
from inside the lab, an image that counters
the official archive
in some interesting ways. With the help
of Conostrolo’s drawing and additional research
by the architect
in her CAD files, I’m now working with USF’s
Advanced Visualization Center
on a series of 3-D models. This is just one early
sketchy prototype made in Maya, then moved into
the Unity 3-D game engine
for further development. In fact, this snapshot is
on the development website
for the project as a whole. As we refine the models,
we learn more about design, infrastructure, and work flow
of this historic lab. We’re modeling individual
machines used one at a time and digitizing a selection
of materials in the Busa archive to be integrated with the models
along with punch card emulators and interviews with
former punch card operators. The whole process of multi-modal
modeling is extremely valuable
in itself but perhaps especially because
it so often reveals
what we don’t know when it comes to the history
of this first humanities
computing center. Thank you very much.
Thank you, N.E.H. Audience: [Applause] John: Ok, so this project
is a collaboration between University of Virginia
and Marymount University, and we have
3 project directors– myself, John O’Brien at U.V.A., Tonya Howard,
Marymount University, and Chris Rotollo
of the University
of Virginia Library System. Tonya and I are both
professors of English, and Chris is
a research librarian, and our project literature
and context and open anthology is designed to work
with students to build
a digital anthology of literature in English, a place with text edited
at least in part by students
for students and a place that will serve
as a repository of reliable,
fully contextualized
works of literature that will take advantage
of the affordances offered
by digitization. We hope through the time of
this grant to create a prototype of what we believe can be
an open-educational resource that will be available
for teachers, students,
and the general public. Tonya: So as teachers whose
primary objects of study are texts and texts
in the public domain
at that, where our students get these
texts is of crucial importance, as I think many of us
in this room know. Print anthologies can often be
cost prohibitive as some of our
colleagues earlier pointed out, and so our students turn easily
and uncritically to the wealth
of online materials. However, it’s really hard for
learners to assess reliability
or discern edition, especially because the most
easily Google-able of e-texts often have no sense of
materiality or historicity. And unlike scholarly editions, what our students often find online incorporates little
if any annotation, and being spread across the web,
it’s also very hard to teach
with online texts. So with this help
from the N.E.H., we hope to combine our
two existing prototypes
to build a solution. Chris: And we’re calling that
solution Literature in Context, a Classroom Oriented
Open-Anthology of
Literary Works. Open in the sense that the texts
will be freely available
to the public and built on an open platform
using open standards. In its initial phase,
the project will focus on text published between the years
1650-1800, on shorter pieces
that are frequently taught
in college classrooms, and therefore will have
a wide potential reader base. Each text will include
a scholarly introduction, a carefully edited base text,
explanatory notes, representative page images, and clear
bibliographic provenance to orient the reader
to the text material’s history. We will also develop resources to help instructors
engage students in the task of editing and annotating
literary texts that can be added
to the collection later on. These materials will likely
include TEI templates
for encoding the texts, training materials,
sample assignments, annotation guidelines,
and so on. And by including students in
the production of the anthology, we will engage them directly
in the public construction
of knowledge and we hope that the anthology
will become a widely adopted
open-educational resource, which will help reduce
the enormous cost of
course materials for students and provide high-quality
literary texts for all readers. Thank you. Audience: [Applause] Jerry: Hi. Jerry Morse from
the University of Wisconsin. My co-P.I. on this is Eric Hoyt,
also from U.W., Madison. This is Adam Curry here,
who you may know from his time
as a Veejay on MTV back in the days when MTV
used to play videos. But here Curry is asking
his Twitter followers for any copies of his old
podcast, the Daily Source Code, which he produced
from 2004-2013. His show is often cited as
one of the first ever podcasts, and yet even Curry didn’t know
where these historically
significant files were. Curry is not alone. Many other podcasters don’t have
the resources or knowledge to properly preserve their work or they simply don’t realize
that just by virtue of the fact that they’re taking part
in a format’s infancy
they’re also making history. We’re now in what many critics
have called the golden age
of audio. There are over 300,000 podcasts
and 8 million episodes in over 100 different languages
and more are launching
every day. Yet despite the excitement
over this audio explosion, the sounds of
podcasting’s nascent history remain mystifyingly difficult
to analyze. There are few resources
for anyone interested in researching the form,
content, or history of podcasts and even fewer tools for
preserving and analyzing the sounds of this golden age
of audio. This is what fueled
my colleagues and me to create PodcastRE, short for podcast research, a database of podcasts which is now one of the largest
publicly oriented collections
of its kind, one that tracks, indexes,
stores, and preserves
over 1,300 podcast feeds and over 300,000 audio files and their metadata. It’s still very much in beta,
but you can search audio files
for key words, stream audio, find interactive transcripts, and conduct advanced search
filters by date, length,
and other parameters. We also just launched the pilot
episode of our podcast about
the podcast project, so you’ll find that there. Thanks to this N.E.H. grant,
we’re also working on other ways
to analyze podcasts either by visualizing
large-scale questions, such as the occurrence of key
words or number of podcasts, or by analyzing the sound itself
through wave form
and spectrum analysis. Could tools like this give us
clues to the overall production
quality of podcasts, whether it was made by an amateur
or professional, say? Or could we make guesses
as to the gender of the host? Podcasts are often critiqued for
being overly or predominantly
male hosted, so could we figure that out
through software? Ultimately the goal
is to create a data base that will let researchers use
and study audio files in a way that is as familiar and as useful as textual resources. We may be in a golden age
of podcasts, but if we’re not making efforts
to preserve and analyze
these resources now, we’ll find ourselves
in the same conundrum many radio, film,
or television historians
now find themselves in, writing, researching,
and thinking about a past
they can’t fully see or hear. Thanks. Audience: [Applause]
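A minimal sketch of the kind of low-level feature that the waveform and spectrum analysis mentioned above might start from, assuming a placeholder file named episode.wav; this is an illustration of the general idea, not PodcastRE’s actual tooling.

    # Compute a spectrogram of one episode and report where its energy sits,
    # a crude low-level feature of the kind that might inform guesses about
    # production quality. "episode.wav" is a placeholder path.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, samples = wavfile.read("episode.wav")
    samples = samples.astype(float)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)

    freqs, times, power = spectrogram(samples, fs=rate)
    mean_power = power.mean(axis=1)           # average energy per frequency bin
    peak = freqs[np.argmax(mean_power)]
    print(f"most energetic band is around {peak:.0f} Hz")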
Mattie: Hi. My name is Mattie Burkert. I’m the project director for
the London Stage Database. Our team is based on my campus,
Utah State University, and has members at a variety
of other institutions,
as you can see here. We’re working to activate
and make accessible the data contained
in the London Stage 1660-1800, an 8,000-page reference book
published in the 1960s. It contains
a wealth of information
about nightly performances in London’s public playhouses
over a 140-year period compiled from
archival playbills,
newspaper advertisements, interviews,
and box office count books. And it was very shortly after
these books were published that the editors of London Stage
commissioned a corresponding
computerized data base. The London Stage
Information Bank,
as it was then known, was developed from 1970-1978
with support from the N.E.H. as well as ACLS,
the Mellon Foundation,
and others. You can see here images of
Project Director Ben Schneider
and his team at work as well as the title page
of a memoir he wrote about
the start-up phase called
“Travels in Computerland.” Unfortunately the data
fell into obsolescence
after only a few years, and it has long been thought
irretrievably lost, and so ours began
as a recovery project. Over the past two years,
I and my collaborators have recovered a dirty version
of the project’s data and printouts of its code base. You can see a small snippet
of the documentation
on the screen there. All of these artifacts were
thought to be completely lost when the magnetic tapes
disappeared from the Harvard
Theater Collection in the 1980s, and if you want to learn more
about this recovery process, it’s detailed in an article in the current issue of
“Digital Humanities Quarterly.” We’ve been hand correcting
errors and gaps in
the damaged data and prototyping a parsing program based on the principles of the original programs, and our next step is to develop
a more robust relational
data model, transform the data
into preservation formats, and create a web interface
to allow users to query
the data base or to download the full data set
for exploratory analysis. This revitalized data base
will be the foundation for a multi-layered
research environment in which users can view
and compare the pages of
the London Stage Reference Book with the flat file data
produced in the 1970s as well as the updated
parsed data and the new relational model,
and these are just wire frames, but you can see a mock-up
of how that might look. By offering users
a unique opportunity to look underneath the hood
of an early humanities
computing project, we aim to foreground
the situated nature of the data and the history
of its transmissions
and transformations. Our data base
will thus allow scholars both to harness
the possibilities of
quantitative methods in approaching this data and also to cultivate
a critical stance towards
the digital condition. More broadly our project
seeks to model best practices for recovering
obsolete digital projects, a problem we will continue
to face more and more, and for promoting
the sustainability
of current projects. And finally we see this
as the first step in the
creation of a platform that can interface with other
digitization and data collection
projects now underway, including the work
on playbills at Penn, work on 19th-century theater
at Oxford, work on French theater
happening at M.I.T., and to enable the future growth
of a network of related
data bases and tools for theater research
across time periods
and national traditions. Thank you. Audience: [Applause]
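The more robust relational data model mentioned above could, in barest outline, separate theaters, works, and the nightly performances that join them. The schema below is an invented simplification for illustration, not the London Stage Database’s actual model.

    # A bare-bones relational layout for nightly performance data: theaters,
    # works, and performance events that join them. Invented simplification.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE theater     (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE work        (id INTEGER PRIMARY KEY, title TEXT, author TEXT);
    CREATE TABLE performance (id INTEGER PRIMARY KEY, date TEXT,
                              theater_id INTEGER REFERENCES theater(id),
                              work_id INTEGER REFERENCES work(id));
    """)
    conn.execute("INSERT INTO theater VALUES (?, ?)", (1, "Lincoln's Inn Fields"))
    conn.execute("INSERT INTO work VALUES (?, ?, ?)", (1, "The Beggar's Opera", "John Gay"))
    conn.execute("INSERT INTO performance VALUES (?, ?, ?, ?)", (1, "1728-01-29", 1, 1))

    rows = conn.execute("""
        SELECT p.date, t.name, w.title
        FROM performance p
        JOIN theater t ON t.id = p.theater_id
        JOIN work    w ON w.id = p.work_id
    """).fetchall()
    print(rows)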
David: Hi, everybody. I’m David Bamman from U.C. Berkeley. This is Taylor Berg-Kirkpatrick
from CMU. We also had a host of other
advisors for this project
as well. Henna Alporabram,
Sara Cohen, Steven Downey, Todd Hickey, Ted Underwood,
Mike Full, Mike Whitmore. And what we’re talking about
with this project is reasoning about the visual
information in printed books. In many ways,
the origin of this work is really rooted in a reaction
to the large-scale of
digitization efforts that a lot of you probably
are familiar with. So in the past 15 years,
Google, the HathiTrust,
the Internet Archive have been creating really
large-scale digitized
collections of books, and these books have been
really amazing for a lot of
computational analysis at scale. They help us understand
the contours of genre, for understanding how attention
is allocated to characters as
a function of their gender, they’ve helped us understand
how attention is allocated to geographical cities
in the United States. Now, the way that a lot
of the pipeline works for
these digitization efforts is all the same for each
of these different projects. You start with a book, right,
a material thing. You scan it to get a page image,
and then you OCR that text. You recognize the actual
characters in that image
to know what those words are. So what you get out of it
is something that looks like
the right-hand side here, right? This is Keats’ “Endymion.” And with each of these layers
of processing, we add one step of mediation, that from the original book
that we start with that has some physicality,
that has materiality to this string of characters
and numbers. So in essence, the entire representation of a book for many of these projects
always gets boiled down
to this pinch point of just that string of
characters and numbers
it contains. And for many of these projects
that are large-scale analysis
of books, at some point, it no longer is
meaningful to call this string
a book. This is a book, right? This is William Morris’ version
of all of Keats’ poems from the Kelmscott Press
in 1894. We can see there’s a much more
rich source of information
in this actual page image than in
the simple representation of the characters and numbers
that we get from the books
beforehand. So what we’re aiming to do with
this work is try to bring back
some of this mediation that has separated us
from this source text by asking, how can we actually
reason about the appearance and the physical position of
information within books to help us learn more about
what we can extract from that for the process of just
understanding the text itself? Taylor: There’s 3 main lines
that we’re looking at in terms of building
computational models that make use of
paratextual information. The first is about extracting
paratextual elements like footnotes and [indistinct]
and segmenting this out and also reasoning about it
jointly in a probabilistic model. The second is about doing bibliographic tasks automatically, like compositor attribution. And the last one is about
building computational models that can reconstruct
[indistinct]. We want to thank the N.E.H.
for extending this work and giving us money
to continue. Audience: [Applause]
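A small sketch of the kind of information the usual scan-then-OCR pipeline throws away when a page becomes a bare string: word-level positions on the page. It assumes the pytesseract wrapper, a local Tesseract install, and a placeholder scan named page.png; it illustrates the general idea, not this project’s models.

    # OCR a page image but keep each word's bounding box, so the position of
    # text on the page (margins, notes, ornaments) is not discarded when the
    # book is reduced to a string. "page.png" is a placeholder scan.
    import pytesseract
    from pytesseract import Output
    from PIL import Image

    page = Image.open("page.png")
    data = pytesseract.image_to_data(page, output_type=Output.DICT)

    for word, left, top, width, height in zip(
            data["text"], data["left"], data["top"], data["width"], data["height"]):
        if word.strip():
            print(f"{word!r:20} at x={left}, y={top}, w={width}, h={height}")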
Ed: Hi. My name is Ed Baptist. And I am here representing Freedom on the Move, which as you see is a crowd-sourced database
newspaper ads for runaways. Or if we use the terms of
historian Graham Hodges,
freedom-seeking people. You can see some
of the collaborators
that we have there including faculty at
several different institutions, and we’ve been supported not
just by the entities at Cornell, but also by a start-up grant
from the N.E.H., which we are grateful for. So these ads began to appear
as soon as newspapers began
to appear in North America. The earliest I’ve found is 1708 in what I think
is the first issue
of the “Boston News-Letter,” which is the longest
continuously running newspaper. In other words, something that
appeared every single week. I often call these ads,
which contain a really tight
amount of information that’s crammed together to try
to identify the person that the enslaver or
the agent of the enslaver
is trying to find. I often call them the tweets
of the master class. And people, you know,
have a sort of a nervous laugh, and it should be nervous
because what these represent
actually is an attempt to activate
a network of surveillance
and policing which by law and by policy
constituted all the white people in the south
in the slave-holding states, and in fact, even in many
of the northern states to entice them to pursue and
study African-American people to see if they were the people
who fit the description, and it’s very easy to see
some of the origins
of our current debates about policing, about violence,
and about vigilante behavior in this process of surveillance
that the runaway ads institute. How many are there? There are something like
100,000-200,000. We don’t know yet. There are several existing
collections, some in print,
some online. All of these take a corner
of the problem. They have different formats,
and they also are mostly
text-based. So what do we plan to do? We plan first of all
to try to incorporate
all the data that’s out there that’s already been collected
and as much as people
are willing to share, and we’ve had
wonderful cooperation
from our colleagues there. And then,
just bounding ahead here– Actually this shows you some of
the process of submitting ads
to be included. This is the A.P.I.
we’ve been working on so far. But just to go back here, when you are using the site
that we’re in the process
of building– We’ve taken down our beta site.
We learned a lot from it. What you’re going to do
is transcribe the ad and then answer a series of
parsing questions about it. Through this we hope
to accumulate a massive
amount of information that we do not yet have. Thank you. Audience: [Applause] Jesse: Oh. That’s not me.
Ok, here we go. Hi. I’m Jesse Casana,
department of anthropology
at Dartmouth. I’m an archaeologist, so I think
that most people would imagine that that means I spend
a lot of my time digging up
ancient artifacts, but my real interest is actually
in trying to explore
the human past through the analysis of remains
that are inscribed throughout
the entire landscape. Excavation is really
not the best way to do that because it’s so slow,
it’s extremely labor intensive, and it’s destructive, and therefore it’s necessarily
always limited in scope. These pictures are from
a project that I co-direct
in the Kurdistan region of Iraq, and there,
even with a large team
and a whole month in the field, we could only excavate
these 4 tiny holes on a site that measures
more than a kilometer across. So even using other kinds
of geophysical technologies, like the magnetic gradiometer
that’s pictured here that enables us to see below
the ground over larger areas, but still it would take us
something like 60 days
of hard labor with a well-oiled team
just to cover this one site, let alone
its surrounding landscape or any of the other hundreds
of sites that we’ve documented
throughout this region. And so what my project is
really dedicated to developing is the use of a new technology
of aerial thermal imaging as a means to explore
the archaeological landscape on a bigger scale than is really
otherwise possible. Since the 1970s, archaeologists
have sort of hypothesized that ancient cultural features
on or below the ground, like say a buried stone wall
for example, would probably heat and cool
at a different rate than
the soil that they’re in, so like in theory
all we would need to do is to capture a thermal image
at high resolution
of the ground at an optimal time
in the diurnal cycle, and it turns out
that’s the middle of the night, and bingo, you got yourself
a map, right? But it wasn’t really
technologically feasible
to do that until just a couple years ago, so our work is now using
these very advanced drones
and navigation software. They’re equipped with the latest
generation of this new kind of
radiometric thermal camera. We use that to collect imagery
over very large areas, about a kilometer a day
around archaeological sites, and then we process that imagery
within a G.I.S. framework to reveal subtle
archaeological features like the buried building
foundations, pathways,
and gardens that are visible here
in our recent work at a 19-century Shaker village
in Enfield, New Hampshire. Through this project we’ve
partnered with numerous
other archaeologists to collaborate
on these kinds of surveys at sites in the United States,
Mexico, the Mediterranean, the Middle East, and elsewhere, and we’ve already had
some pretty cool results. Last October, for example,
we surveyed a 16th-century
Native American settlement located on U.S.
Forest Service land in the National
Tallgrass Prairie. Notre Dame led excavations
at the site in the past
couple of years. They had included
about 50 volunteers, but they were necessarily small
in scale and they hadn’t found
any structural remains. Our thermal survey was completed
in just two days. It enabled us to explore the
entire region in and around this
series of World War II bunkers. We found house compounds
plus this really cool unusual
hexagonal earthwork. So collaborations like this one
are ongoing and hopefully will
lead to lots of other
exciting new results. Thanks. Audience: [Applause]
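A toy illustration of the arithmetic behind making the subtle thermal contrasts described above visible (a buried wall heating and cooling at a different rate than the surrounding soil): subtract a local average so small anomalies stand out. Real processing happens on georeferenced imagery in GIS software; the data below is random.

    # Highlight thermal anomalies by subtracting a local (smoothed) average,
    # so small temperature contrasts stand out. Random stand-in data.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(0)
    thermal = 18 + 0.5 * rng.standard_normal((200, 200))  # fake nighttime surface temps (C)
    thermal[90:110, 40:160] += 0.8                        # fake buried wall, slightly warmer

    local_mean = uniform_filter(thermal, size=25)         # neighborhood average
    anomaly = thermal - local_mean                        # residual = local contrast
    print("strongest positive anomaly:", round(float(anomaly.max()), 2), "degrees C")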
Interpreter for Patrick: Good afternoon, everyone. My name is Patrick Boudreault,
and I’m the senior investigator along with Dirksen Bauman
at Gallaudet University, and we have started
the Deaf Studies
Digital Journal. That’s our project. So Dirksen’s going to start
with talking a little bit about the history of the project
to start. Interpreter for Dirksen:
Now, the Deaf Studies
Digital Journal was founded back in–
the idea, at least–in 2002, and it was a long,
arduous process to get
it into development. but it is the world’s
first peer-reviewed
academic digital journal produced for sign languages. Now that part’s really important
because when we look
at language, for many years American
Sign Language was overlooked
as an actual language until as recently as 1996
by the MLA. The MLA has categories
of language, and up until that point,
American Sign Language was under the category
of invented language
along with Klingon. So obviously there’s something
wrong with that picture, right? And so it’s really important
that we share the wisdom, the scholarship
of the deaf community and challenge the norm of what
was originally thought of
of American Sign Language and make that knowledge public. So we founded
this academic journal. In 2009 was our first launch. Where we then thought of
how could we make citations
in American Sign Language? How could we have academic
discourse in sign language
in a digital platform? And so that was something we had
thought about all along the way, and then had a platform
along to go with it
using Flash at that time. Now, Flash was the thing
back then. Not so much anymore. But thank you to N.E.H., we can
use a new digital platform in order to get our
academic journal back on track. And so now I’m going to
turn it over to Patrick,
who is our executive editor. Interpreter for Patrick:
Now this was what our current
digital journal looks like. It was as late as two years ago.
We were moving forward. At that time the idea behind the
concept was ahead of its time and didn’t quite have
the technology to back it, but this is what
the platform looked like. There were different articles,
and it’s still actually
running now. You can check it out
if you’re interested. And like Dirksen said,
it’s Flash-based, and I mean,
we were kind of overwhelmed with what we wanted to do
and what the technology
could provide us, and so there was
a big gap there, and that’s where N.E.H.
is stepping in to fill that
and support us with a new platform, supported also by
the University of Michigan using their Fulcrum platform,
which is a very strong network
of library archives that will support us
in order to do the work
we want to do and will create
a library interface data base where we can create
accessible scholarship articles and archives
to all we want to do. Now, the library of technology, we also have tons of different
avenues that we’re going
to work with in order to make this possible. So we’ve had 4 issues so far. We want to get those
into the new platform and then continue to grow
from there into our future. So here’s our plan
with what we want to do
with the grant from N.E.H. So we want to preserve what
we already have and migrate it
to the new platform, using University of Michigan’s
team in collaboration in order
to figure out the best way for academics to get access
to scholarship articles
in an academic platform, and then also our plan is to
disseminate, create open access
to articles and scholars alike. Thanks. Audience: [Applause] Megan: Hi, there. I’m short.
I’m Megan Brett. I’m the digital
history associate–I’m sorry– at the Roy Rosenzweig Center
for History and New Media
at George Mason University and project manager
for the project Transcribing and Linking
Early American Records
with Scripto and Omeka S. In 2010, CHNM received
a start-up grant which funded
the development of Scripto, a free and open-source
transcription tool for the web publishing platforms
Omeka, Drupal, and WordPress. And over the last 8 years, Scripto has continued to be
one of Omeka’s more popular
plug-ins, enabling organizations
and individuals to connect
with their communities and open up
their intellectual work. Projects like the University
of Iowa’s D.I.Y. History, University of Delaware’s
Colored Conventions Project, and as you’ve heard on NPR,
Villanova’s Last Seen. These are all powered
by Scripto. But Omeka in the last 8 years
has continued to develop, and as Patrick mentioned,
we now have Omeka S, which shares the same goals
and principles as Omeka Classic but has a completely new
code base, which complements
all of the features you love with a new data model
that integrates linked
open data principles. So…we are redesigning Scripto.
Yeah, ok. Redesigning
the complicated plug-in such as Scripto requires
a project already implementing
an existing version of Scripto with a large digital collection
to be a robust test case, and in this instance,
this is the papers of
the War Department, which is the small graphic. PWD publishes
nearly 43,000 documents
from the early national period for scholars, students,
particularly genealogists
and history enthusiasts for free online with concise
document descriptions, and then people volunteer
to help us transcribe
the documents. The website currently is built
on a very old early 2000s
custom database, and with funding from another source, we will be redesigning the PWD to update its classic look. So the scope of our advancement
grant is to update and redesign
the Scripto transcription tool to make it compatible with the new data architecture
of Omeka S, work which has already begun
in GitHub, if you wanted to
take a look, and use the papers of the
War Department as a test case to produce a number
of documentation guides
and publications to support other organizations
in their efforts to develop
their own transcription projects for their communities
with Scripto and Omeka S. Thank you. Audience: [Applause]
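For a sense of what compatibility with the Omeka S data architecture looks like from the outside, a hedged sketch of reading items over an Omeka S site’s REST API: the base URL is a placeholder, and the JSON-LD keys shown follow common Omeka S conventions but may vary with a site’s configuration.

    # Pull a few items from an Omeka S installation's REST API and print their
    # titles. "https://example.org" is a placeholder site; keys like "o:id" and
    # "dcterms:title" follow Omeka S conventions but can vary by vocabulary.
    import requests

    BASE = "https://example.org"
    resp = requests.get(f"{BASE}/api/items", params={"per_page": 5}, timeout=30)
    resp.raise_for_status()

    for item in resp.json():
        titles = item.get("dcterms:title", [])
        title = titles[0].get("@value", "(untitled)") if titles else "(untitled)"
        print(item.get("o:id"), title)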
Columba: I’m Columba Stewart from the Hill Museum
& Manuscript Library at Saint John’s University
in Minnesota, and I’m here to tell you about
the development of VHMML 3.0, the third iteration
of a web-based platform to deliver digitized manuscripts
from a number of places
around the world. Phase one was developed
with an IMLS grant. Phase two relied on
the Henry Luce Foundation, and the Mellon Foundation,
the latter particularly
for metadata, content or system design
as well as content creation, and now thanks to the N.E.H.,
VHMML 3.0. So so far, so boring
because this looks like
many other projects which are designed to display
digital assets. The key to this project
is the nature of the content, which includes recently
digitized manuscripts from places like Iraq and Syria
and Lebanon and Turkey, the manuscripts of Timbuktu,
and various other projects which are making
accessible to scholars manuscripts which are either
defacto inaccessible because they’ve been moved
or are kept behind all sorts
of cultural physical barriers, or manuscripts which have been
recently destroyed in conflict, as is the case with materials we
digitized in places like Syria
and also in Iraq, and the other thing
that this is doing is creating
the largest collection
of digitized manuscripts representing
an array of cultures moving from Europe
all the way to India with a number
of different languages which require description in
various languages and scripts, which represent at this point
Christian and Islamic traditions to the tune of about
200,000 digitized manuscripts, 20,000 of which are
already available on the site, and the eventual goal is
to make all of them accessible. So what we’re doing in 3.0
is working with a platform
as it exists, doing some tweaks, but also
really trying to improve the ability of scholars
to discover manuscripts in that inherently
cross-cultural environment. Because through that you can
trace the influence of these
textual traditions on each other as well as the flow of
transmission of texts. The important thing
about this project is these are the manuscripts
that the Europeans and Americans
didn’t take to their libraries, but these are the manuscripts
which remained with
these communities in their ancient locations
and represent the culture as it was living in manuscripts
as recently as the 1960s. We’re also trying to improve
the communication with our users
and among our users and importantly trying to make
it easier to share metadata
with other projects and to give them better metadata
because we want to play well
with others, so we’ve heard a lot today
about friendships
and cooperation. We certainly want
to foster that. We also have distributed
cataloging projects, working with scholars
around the world, and we want to work
on that as well. My timer says I’m done. Last point, we’re using
the matching funds to build
sustainability, and the reason sustainability
is important to us is our project is sponsored
by a Benedictine monastery
of which I am one. Thank you. Audience: [Applause] Jillian: Hi. Good afternoon. I’m Jillian Galle, and I’m
an archaeologist at Thomas
Jefferson’s Monticello. I’m the project director
for the digital– for this project expanding the
digital archaeological archive of comparative slavery
research consortium, and Worthy Martin with
UVA’s Institute for Advanced
Technology in the Humanities is my co-P.I. on this project. So this project enables
archaeologists with DAACS, which is the acronym
for this long project, at Monticello to extend and
strengthen an international
consortium of researchers working to collect, share,
and analyze archaeological data
from slave-based societies that evolved in North America
and the Caribbean during the early modern era. The project supports major
improvements to open-source
data base infrastructure that is shared
by consortium participants and also provides funding
for training in its use. The new infrastructure will
allow consortium participants
to share more efficiently data from their own research
projects with each other
and with the public via the DAACS website. Now, just a little quick
background on DAACS itself. DAACS was founded in 2000
with a generous grant
from the Mellon Foundation. We bring archaeological
collections to Monticello where they are completely
re-analyzed and then put online, re-analyzed to standards
in a PostgreSQL database and put online for scholars
and the public to access
for free. So when the project started
in 2000, we had 10 digitized sites
launched on our website. Today, as you can see,
we’ve got over 80 sites and millions of artifacts
that we serve out to the public and images, maps, and other
contextual information. And this gives you a sense
when we did start, when the site
initially launched, we had a small consortium
based in the Chesapeake. Now we have collaborators
around the Caribbean, the U.K.,
and the United States. And we formed the DAACS
research consortium in 2013 with funding from
the Mellon Foundation. Its goal was to take the database that DAACS used to deliver data to the public and to make that database
accessible to scholars through a web-based interface. So between 2013 and 2015,
we developed that
web-based interface for access to certified users. We brought archaeologists
to Monticello. We trained them in the protocols
and standards to create
these data. This project now takes
the data base that we developed with the Mellon funding
between 2013 and 2015 and allows us to develop
several other interfaces
for interaction. This will make the data base
accessible not only to museums
and government organizations, but to graduate students
and smaller non-profits, cultural resource
management firms that might not have the ability
to engage with the full system that we currently use
in-house at DAACS. So we certify–I’m out of time. But I think the unique part
of the project is that we are thinking about
how data are created before data go into
the data base as well as ensuring standards
as they go out of the data base. Thank you. Audience: [Applause] Walter: Right.
I’m Walter Scheirer from the University
of Notre Dame. I’m serving as project director
for the Tesserae
Intertext Service, intertextual search access
to digital collections
in the humanities. This is joint work between
the University of Notre Dame and the University of Buffalo. Neil Coffee at
the University of Buffalo
is a professor of classics. I’m a professor
of computer science, making this a fun
interdisciplinary project. So what is Tesserae? Tesserae is a free
open-source search engine that scholars use to detect
allusions in literary texts. Thus far the project has
focused on classics, archiving texts
in Latin, Greek, but now we’re turning our
attention to other literatures, and we’d love to make Tesserae
an accessible tool to any scholars interested
in any textual analysis tasks
related to intertextuality or text reuse
in the general sense. So here’s an example of what
the current website looks like. The user is
able to specify text. In this case, the search
was for Virgil’s “Georgics,” looking for parallels to
the poetry of Catullus. And the search engine
uses some mechanisms
from pattern recognition to surface known parallels or
more interestingly new parallels based on a form
of fuzzy matching. And this could be
in a lexical context,
which is what you’re seeing now with this fuzzy diagram match or it could be a match
in semantics. Does the meaning of the two
texts share some relationship? And most interestingly is there
a relationship in sound? Since we deal a lot with poetry, this is of course a very salient
and interesting feature. And so in the current grant,
we’d love to expand not only the
features of the Tesserae search, but also the access to texts. And what we’re pitching here
is a Tesserae information
service. We propose to develop
a full-fledged web A.P.I. to build out a web service
that could be integrated into other collections
of texts. And in this grant period, we’re
going to look at doing this
with some partner collections like the Perseus
Digital Library, the newly launched
Digital Latin Library,
and also Open Philology. Also related to this,
we’re very interested in
supporting user supplied texts. We often get requests
from users, asking, “Can I try Tesserae
on my own texts? But “I don’t want to upload
those texts and give you
full access to them because I’m working on
a new critical edition” or there are
some copyright constraints. And so we’d love to be able
to supply users with the
functionality via the A.P.I. to do just that. As I mentioned, again, we’re
still interested in developing
the search features. We’re very interested
in machine learning. That’s what my group does
at Notre Dame, and so we’re going
to be looking at developing
some of those capabilities. And finally a culminating
feature we’d like to develop is something that emulates
human recollection to make this a more
powerful resource for scholars. And I’d like to just thank
the N.E.H. for their support. This is wonderful.
Thank you all. Audience: [Applause]
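The lexical side of the Tesserae-style matching described above can be sketched in a few lines: flag lines of two texts that share a pair of (crudely stemmed) words. Real Tesserae matching adds proper lemmatization, frequency scoring, semantics, and sound; the second text’s lines below are invented for illustration, not a genuine parallel.

    # Flag line pairs that share two (crudely stemmed) words, the basic lexical
    # idea behind intertext search. The "stem" here is only a prefix comparison.
    from itertools import combinations

    def stem(word, k=4):
        return word.lower().strip(".,;:")[:k]      # crude stand-in for lemmatization

    def word_pairs(line):
        stems = sorted({stem(w) for w in line.split()})
        return set(combinations(stems, 2))

    text_a = ["ibant obscuri sola sub nocte per umbram"]   # Aeneid 6.268
    text_b = ["errabant comites sub nocte silenti",        # invented toy line
              "clamor ad alta ferit sidera"]               # invented toy line

    for i, a in enumerate(text_a, 1):
        for j, b in enumerate(text_b, 1):
            shared = word_pairs(a) & word_pairs(b)
            if shared:
                print(f"text A line {i} ~ text B line {j}: shared word pairs {shared}")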
Woman: So let them know that now we’re shifting to the
institute’s portion of our… We are shifting to
the institute’s portion
of our series here. I’m going to start with
a little bit of a story. I almost always feel like
an impostor when I do my own research
in poetry and text analysis. While I was intrigued
with computers when I was
in fourth grade or so and even took
a basic programming class. When I got to high school
is when my parents bought
our first desk-top computer, and the thing that most
everybody was concerned about was whether or not
we were going to break it. So the people who are allowed to
install new software and change
configurations on the computer were my brother, who was
5 1/2 years younger than me,
and my father. I was allowed to use
the word processing software, and so what I learned very early
was that I was always at risk
of breaking computers and that I should
always defer to somebody else
to maintain them and fix them. That caused me problems
when I graduated and became
the communications coordinator at the Cystic Fibrosis
Foundation. I was tasked with–
Oh, I’m sorry. Are we even at my slide?
We’re not. Sorry. I was tasked with building
the Cystic Fibrosis Foundation’s
first website. And when I did that,
it was fantastic. I learned HTML.
I got it all over my computer. I had created the first website
for the organization, but I had no idea how
to make it visible on the web. I had never learned about FTP
or what a server was, and so I had to ask my boss’s
husband, who was a programmer, and you might predict
what happened. I gave it over to him,
and he and his wife uploaded it, and guess who took credit for
having done the whole thing? There are lots of people like me
out there whose stories
are the same and who have some background
in the humanities and who would be great
and would like to get involved
with the digital humanities but have some sense of anxiety
that they’re going to break
the computer or they really don’t know
what they’re doing. And so the digital humanities
research institutes that we’re
doing at the Graduate Center are designed to address
that need, and we have 4 things that
we’re really worried about. The first is that we need to
learn how to reach more people
more quickly, and in an age of austerity
when people are reducing
their travel budget, not many people
who aren’t incredibly motivated are going to travel far away
from where they are to learn something that
they’re uncomfortable with. So we need to go to them. Second, we need to be able to
build up core digital skills, those skills that
I didn’t even know I needed, things like how to work
from a command line, how to share data effectively,
how to learn early
programming skills. Third, we learn socially
as humanists. We like to discuss,
ask questions, interrogate things, so we need to have a pedagogy
that addresses those needs
as humanists, and fourth, we don’t have
the time to re-create resources
over and over and over again, and so what we’ve been doing
at the GC Digital Initiatives is building the curriculum,
and we’re going to
give it to you. This year, we’re inviting
15 participants to come
to the Graduate Center to learn those basic skills
through our institute and experience the pedagogy, to take that pedagogy back
to their local institutions, to get 20 hours of
individual support, to share that curriculum,
make changes, come back,
tell us about it, and we’re going to create
a guide to leading digital
humanities research institutes that anybody can use. Applications are currently open,
and this is a quote from somebody. We’ve run these institutes
and already trained over 150
students, faculty, and scholars through the CUNY System. Using it, you go the whole way
from a command line through machine learning
with Python in one week. Check it out. Definitely apply. Dhinstitutes.work
is the website. Audience: [Applause] Victoria: Hi, everyone. My name is Victoria Szabo,
and I’m here to talk about
VARDHI, the Virtual and Augmented
Reality for the Digital
Humanities Institute. I am in visual media studies, and my collaborator, Phil Stern,
who’s co-leading this venture,
is in history. Now, Virtual and Augmented
Reality for the Humanities,
what does that mean? Well, it could mean something
like historical reconstruction
and presentation. It can also mean museum
exhibition and interpretation. But what are all
the potentialities
for these types of technologies, going from the CAVE
to Cardboard, going from the museums
to the streets, and what do they have in common? That’s part of what
we want to discover. So we have a lot of different
research questions that are coming out of
a sense that virtual
and augmented reality have exploded onto
our public consciousness and are very visible to us
in the game worlds and entertainment worlds,
most notably maybe with
something like Pokemon Go, but what do they have
to offer us? We also wanted to think
critically about their use both in terms of
socio-economic considerations
and access considerations, but then also about what
potential violence they may
or may not do to the subject matter
that we’re studying. So some of our key critical
questions are around that idea of the methodological benefits
and risks of doing VR and AR. And, relatedly, what we call
the media effects of abstraction, interpretation,
and transformation. So what is it that you’re doing
to your content by putting it into this form? And for this,
we can go into adaptation theory as well as reading the critical
theoretical literature and taking seriously
the critiques that some
of our own colleagues have for digital interventions
in general. Also thinking in terms
of communicative value
for our research, and then thinking as well
what we can learn from our colleagues
in the sciences
and social sciences and in the arts about the
strategies that we engage in
in this field. Also thinking about what counts
as good or useful or important. Say you’ve convinced
somebody like the N.E.H.
or someone else to give you money
to do this thing, how do you know
you’ve done a good job? And when is it going to count
for A.P.T. and other things
like that? And then what we plan to do
is in the first year have this two-week session
with a diverse group
of participants. Then in the second week,
bring it all together into
a series of recommendations. So in order to do something
like this, we’re bringing together
a really diverse team. We are in love with the idea
of labs at Duke. Everybody has a lab
for everything. Every time there’s any little
working group, we call it a lab, so we try to sometimes
think about that as being a place of shared mutuality
and collaboration but also a place where
we can truly take seriously the disciplinary perspectives
of others around us. So this is just a list
of some of our team, many of them we’ve worked with
on other projects, and we’re really excited
to see what happens next. Thank you. Audience: [Applause] [Bell rings] Lauren: I’m out of time. Hi, good afternoon.
My name is Lauren Coats. I’m from LSU, and I’m here with my colleague
Emily Nagin from U.G.A., and together we’re directing
an institute on textual data and digital texts
in the undergraduate classroom. This institute arose
out of a discussion
among several institutions: LSU, U.G.A., and Mississippi
State University. We noted a common problem
when building up digital
humanities programs at our respective institutions
if not in our states
and our region. While the digital humanities
have gained much ground in
higher education, as we know from
today’s presentations
and the last couple decades, this growth has developed
unevenly. For some institutions
and individuals with little
or no access to the necessary training,
resources, or support, engaging in digital humanities
scholarship remains difficult
if not out of reach. Thus the institute’s focus
on pedagogy is strategic. We see the space
of the classroom as offering the opportunity for
methodological experimentation
on a small scale, which tends to be
less resource-intensive. The institute focuses
on incorporating digital
humanities’ methods in text-based humanities courses capitalizing on the classroom
as a space that can bring DH
into many hands, especially the hands
of undergraduates, the next generation
of humanists, and expanding access
while building capacity
at a local level. The institute will introduce
participants to quantitative,
visual and computational means to analyze texts, approaches that
require thinking about texts
as digital objects and data. It will also address pedagogical
issues of teaching these methods
to undergraduates, especially in classrooms that
are not focused solely on DH. In recognition of the many ways
that teaching happens on campus, we’re inviting applications from
librarians, graduate students,
and departmental faculty, and please do not apply
because the application period
just closed. I’m very sorry. So likewise, our guides
and teachers over the course
of the institute hold a variety of roles
from librarian to technologist
to professor and come from
liberal arts colleges
and research institutions. Each one has had experience
teaching DH, and we have chosen these faculty
precisely for their expertise
in digital methods combined with their excellence
as teachers. The structure of the institute
takes advantage of the skills
that these teachers can offer by putting participants
in conversation while they first learn
new approaches as well as over
the course of the following year while they integrate
these new approaches
into teaching. Thus as you see in the slide,
we have a one-week
in-person institute this summer
at Mississippi State University hosted by our collaborator
associate dean of library
Stephen Cunetto, who’s there. We then have a series
of virtual sessions and asynchronous communication
over the following
academic year. Given the institute’s interest
in increasing access to DH, the structure also helps us
to accommodate the schedules
of those in positions where taking more than one
week away from regular duties proves difficult
if not impossible. And most importantly, the
structure gives people support in the year following
the intensive in-person week because we want people
to take what they’ve learned and have time and space
to apply it. So they’re each going to develop
a DH assignment module
or workshop, which will then be published
in an open-access repository. All right? Thanks. Audience: [Applause] Gregory: So you’ll all be
relieved that I saw that
I was the last speaker, so I have added no slides,
and I’m going to try to be
even shorter than 3 minutes. Our workshop is called Digital
Editions, Digital Corpora, and New Possibilities
for the Humanities in the Academy and Beyond. It’s a collaboration
of 3 different researchers
or teachers. There’s myself, Gregory Crane. My colleague at the University
of Leipzig, Monica Berti, and my long-time colleague
Anka Ludling, a corpus linguist,
or the professor of
corpus linguistics at the Freie University
in Berlin. And we’re working with text,
but our particular angle has to do with
the comprehension of text, with the ability of human
beings to work with sources in the variety of languages
that exist in the world. Even within Europe,
we have 24 official languages of which only 5
are spoken by more than 1%
of non-native speakers. The other 19, no one
else understands; no one can understand the
language that your mother
spoke to you as a child. And so we think about what are
the methods by which we can make
language thinkable, to enable you to work,
to encode, and produce text that can be used
by a wider range of people. So it’ll be focusing
on dense linguistic annotation and contextualizing annotation
that can be made manually, that can also be made
automatically in the interaction between
the two of these processes. We’re really interested
in a range of languages. I work in a project
called Global Philology that’s supported by
the German Ministry
of Research and Education where we’re really
thinking about things like how do you study the ideas
that circulate across
the Silk Road? There’s more languages there
than at least I could ever
imagine learning, so we’re really trying to
get at this kind of problem, and we’re trying to show you
how to think about producing textual materials
that will play the greatest
possible role in the intellectual life
not only of your immediate
colleagues, but of people outside
the academy in countries
beyond the West. So our application deadline
passed, but I think I can make
exceptions for any really
enthusiastic people who want to apply from here
if I get any dispensation
from the N.E.H. So thank you very much. Audience: [Applause]
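(For illustration only: below is a minimal sketch, in Python, of the interplay between automatic and manual annotation that the workshop takes as its focus, with a curator’s hand corrections overriding a tagger’s output. The tokens, tags, and merge policy are hypothetical, not the workshop’s toolchain.)

```python
# Illustrative sketch: layering manual corrections over automatic token-level
# annotation. Tokens, tags, and the override policy are hypothetical examples.

def merge_annotations(automatic: dict[int, str], manual: dict[int, str]) -> dict[int, str]:
    """Manual annotations override automatic ones; everything else passes through."""
    merged = dict(automatic)
    merged.update(manual)
    return merged

if __name__ == "__main__":
    tokens = ["in", "principio", "erat", "verbum"]           # placeholder text
    automatic = {0: "ADP", 1: "NOUN", 2: "VERB", 3: "NOUN"}   # e.g. a tagger's output
    manual = {2: "AUX"}                                       # a curator's correction
    for index, tag in sorted(merge_annotations(automatic, manual).items()):
        print(tokens[index], tag)
```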
