Workshops

The Center for Digital Scholarship is pleased to offer an expanded selection of introductory workshops. Attendance is free, and all Notre Dame students, faculty, and staff are welcome to attend.

To see scheduled dates and to register, visit the Library Workshops Registration Portal and click the blue "Dates & Registration" button for session details.

Workshops can also be taught on-demand for classes or small groups. Please contact cds@nd.edu with inquiries.


FALL 2014 WORKSHOP SERIES

GIS | Data Use & Analysis | Text Mining & Analysis

Register Now!



Geographic Information Systems (GIS)

Presenter: Matthew Sisk | Data Curation & GIS Postdoctoral Fellow

Geographic Information Systems (GIS): A Brief Introduction

GIS is a system of hardware and software for the storage, retrieval, mapping, and analysis of geographic data. It organizes spatial and related information into a single analytical framework and is used in a variety of academic and industry settings to understand spatial relationships. This workshop will address the question “What is GIS?”, provide a variety of examples, and present the resources available in the Center for Digital Scholarship. [Related resources]

Basic Satellite Imagery Analysis
Using wavelengths of light beyond what our eyes can see, multi-spectral satellite imagery can tell us a great deal about the earth's environment. This workshop will present both the main types of satellite imagery available for GIS and remote sensing applications and some of the analytical techniques that can be applied to them. No previous experience with satellite imagery is necessary, but some understanding of the fundamentals of GIS will be useful.
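
For a flavor of the kind of analysis covered, the sketch below computes NDVI (a standard vegetation index) from the red and near-infrared bands of a multi-spectral scene, using Python with the GDAL bindings. The file name and band numbers are placeholders; the band order depends on the sensor (Landsat 8 is assumed here).

    # Minimal NDVI sketch; "scene.tif" and the band numbers are assumptions.
    import numpy as np
    from osgeo import gdal

    dataset = gdal.Open("scene.tif")                             # placeholder file
    red = dataset.GetRasterBand(4).ReadAsArray().astype(float)   # band 4 = red (Landsat 8, assumed)
    nir = dataset.GetRasterBand(5).ReadAsArray().astype(float)   # band 5 = near infrared (assumed)

    # NDVI = (NIR - Red) / (NIR + Red); values near 1 suggest dense vegetation
    with np.errstate(divide="ignore", invalid="ignore"):
        ndvi = np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))
    print("Mean NDVI:", float(ndvi.mean()))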

Georeferencing in ArcGIS

A key component of GIS analysis is overlaying different layers of spatial data. Scanned maps and photographs must be georeferenced before they can be used in this way: georeferencing involves taking an image of an unknown location and telling the GIS software where it is located in the real world. This workshop will go over the process for georeferencing scanned maps and images, both with and without known coordinates. [Related resources]
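
Under the hood, georeferencing amounts to fitting a transformation that maps pixel positions to real-world coordinates from a handful of control points. The Python sketch below illustrates the idea with an invented set of control points and a first-order (affine) fit; in ArcGIS the same work is done interactively through the Georeferencing toolbar.

    # Illustration only: fit an affine transform from invented control points.
    import numpy as np

    # (column, row) positions clicked on the scanned map
    pixels = np.array([[100, 200], [1500, 180], [1480, 1900], [120, 1850]], dtype=float)
    # the corresponding known real-world coordinates (e.g., UTM easting/northing)
    world = np.array([[500000, 4649000], [504500, 4649100],
                      [504450, 4643800], [500100, 4643900]], dtype=float)

    # Solve world = [col, row, 1] @ coeffs in the least-squares sense
    design = np.hstack([pixels, np.ones((len(pixels), 1))])
    coeffs, _, _, _ = np.linalg.lstsq(design, world, rcond=None)

    def pixel_to_world(col, row):
        """Apply the fitted transform to any pixel location."""
        return np.array([col, row, 1.0]) @ coeffs

    print(pixel_to_world(800, 1000))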

Incorporating Time into GIS
Visualizing time in a static map or in GIS software has traditionally been a difficult process. Recent innovations have made it dramatically easier, though they require specific types and layouts of data. This workshop introduces the temporal tools in ESRI's ArcGIS software package and goes over their current limitations.

Using Python in ArcGIS
The ArcPy module provides a powerful way for advanced users to automate GIS workflows, create custom tools, and interact with other packages using the Python programming language. This workshop will demonstrate how to write Python scripts for use in ArcGIS. Familiarity with GIS software is required, but no programming experience is necessary.
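
As a small taste of what ArcPy scripting looks like, the sketch below buffers every feature class in a workspace. The geodatabase path and buffer distance are placeholders, and the script assumes it runs under the Python installation that ships with ArcGIS.

    # Buffer every feature class in a (hypothetical) file geodatabase.
    import arcpy

    arcpy.env.workspace = r"C:\data\project.gdb"   # placeholder path
    arcpy.env.overwriteOutput = True

    for fc in arcpy.ListFeatureClasses():
        out_fc = fc + "_buffer500m"
        arcpy.Buffer_analysis(fc, out_fc, "500 Meters")
        print("Buffered", fc, "->", out_fc)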

Vector Editing in ArcGIS
A common task in GIS workflows is creating vector data (points, lines, and polygons) from text files, scanned paper maps, or features visible in aerial imagery. This workshop will go over basic and advanced techniques for creating vector data.
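
One common version of this task, turning a table of coordinates into point features, can also be scripted. The hedged sketch below uses an ArcPy insert cursor; the CSV file, its column names, and the geodatabase path are assumptions made for the illustration.

    # Create a point feature class from a CSV of coordinates (paths and columns assumed).
    import csv
    import os
    import arcpy

    gdb = r"C:\data\project.gdb"                   # placeholder geodatabase
    arcpy.CreateFeatureclass_management(
        gdb, "sites", "POINT",
        spatial_reference=arcpy.SpatialReference(4326))   # WGS 84 latitude/longitude
    fc = os.path.join(gdb, "sites")
    arcpy.AddField_management(fc, "site_name", "TEXT")

    # sites.csv is assumed to have columns: site_name, longitude, latitude
    with open(r"C:\data\sites.csv") as f, \
         arcpy.da.InsertCursor(fc, ["SHAPE@XY", "site_name"]) as cursor:
        for row in csv.DictReader(f):
            xy = (float(row["longitude"]), float(row["latitude"]))
            cursor.insertRow([xy, row["site_name"]])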



Data Use & Analysis

Presenter: James Ng | Economics & Business Librarian

Introduction to Using Stata for Data Analysis
Stata is a complete statistical software package that provides everything you need for data manipulation and statistical analysis. It is widely used in empirical research in the social sciences and in epidemiology. This workshop will demonstrate Stata’s frequently used capabilities for data manipulation and analysis. No knowledge of statistics is assumed. [Related resources]



Text Mining & Analysis

Presenter: Eric Lease Morgan | Digital Initiatives Librarian

Analyzing Articles Using JSTOR’s Data for Research Service
Data for Research (DFR) is an alternative interface to JSTOR that enables the reader to download statistical information describing JSTOR search results. For example, using DFR a person can create a graph illustrating when sets of citations were written, create a word cloud illustrating the most frequently used words in a journal article, or classify sets of JSTOR articles according to a set of broad subject headings. More advanced features enable the reader to extract frequently used phrases in a text as well as list statistically significant keywords. JSTOR's DFR is a powerful tool for spotting trends in large sets of articles as well as drilling down into the specifics of individual articles. This hands-on workshop leads the student through a set of exercises demonstrating these techniques.

Requirements: Attendees are expected to bring their own computer to the workshop. Attendees are also expected to register for a JSTOR DFR user account prior to the date of the workshop. See: http://dfr.jstor.org/accounts/register/
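
Much of what DFR exports can also be post-processed with a few lines of Python. The sketch below tallies per-document word counts into one ranked list; it assumes the download has been unzipped into a local folder and that each document's word counts arrive as a two-column CSV (word, count), so adjust the paths and column handling to match the actual files.

    # Aggregate word counts across an (assumed) folder of DFR word-count CSVs.
    import csv
    import glob
    from collections import Counter

    totals = Counter()
    for path in glob.glob("dfr-data/wordcounts/*.csv"):   # assumed folder layout
        with open(path, newline="") as f:
            reader = csv.reader(f)
            next(reader, None)                            # skip the header row
            for row in reader:
                if len(row) < 2:
                    continue
                word, count = row[0], row[1]
                totals[word.lower()] += int(count)

    # The twenty most frequent words across the whole result set
    for word, count in totals.most_common(20):
        print(count, word)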

Analyzing Books Using the HathiTrust Research Center [register]
The HathiTrust Research Center (HTRC) enables a person to generate statistical reports against user-created sets of public domain documents from the HathiTrust. Examples include the creation of word clouds, the extraction and tabulation of names, places, or organizations, and the subdivision of a large corpus of documents into smaller clusters. Using the facilities of the HTRC, a person can discover similarities and differences between texts as well as get an overview of what the texts discuss. Attendees of this hands-on workshop will be led through a set of exercises demonstrating these techniques.

Requirements: Attendees are expected to bring their own computer to the workshop. Attendees are also expected to register for a HTRC user account prior to the date of the workshop. See: http://bit.ly/19HBd63

Simple Text Analysis with Voyant Tools
Voyant Tools is a Web-based application for performing a number of straightforward text analysis functions, including but not limited to word counts, tag cloud creation, concordancing, and word trending. Using Voyant Tools, a person is able to read a document "from a distance": it enables the reader to extract characteristics of a corpus quickly and accurately. Voyant Tools can be used to discover underlying themes in texts or to verify propositions against them. This one-hour, hands-on workshop familiarizes the student with Voyant Tools and provides a means for understanding the concepts of text mining.

Requirements: Attendees are expected to bring their own computer to the workshop. Attendees are also expected to bring two URLs pointing to plain-text (not Word, HTML, or PDF) versions of any novel-length pieces of fiction. These URLs (and the corresponding pieces of fiction) will be analyzed by the student. Project Gutenberg (www.gutenberg.org) is a good place to find novel-length works.
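
For those curious about what happens behind an interface like Voyant's, the short Python sketch below reproduces its most basic function, a ranked word-frequency list, for a single plain-text novel. The URL is only an example; substitute any plain-text work, such as one from Project Gutenberg.

    # Plain-Python distant reading: fetch a plain-text novel and count its words.
    import re
    import urllib.request
    from collections import Counter

    url = "https://www.gutenberg.org/cache/epub/1342/pg1342.txt"   # example URL; substitute your own
    text = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")

    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)

    print("Total words:", len(words))
    print("Unique words:", len(counts))
    for word, n in counts.most_common(15):
        print(n, word)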

Text Mining in a Nutshell
At its core, text mining is about discovering patterns and anomalies in sets of written documents. Invariably, the process begins with the creation of a digitized corpus of materials. The content of the corpus may then be cleaned up, marked up, and organized, thus making it easier for computers to read and parse. From there, “tokens” — usually words — are identified, counted, and tabulated. These techniques — usually employed under the rubric of “natural language processing” — form the basis for more sophisticated applications. This one-hour workshop familiarizes participants with the fundamentals of text mining — what it can do and what it can't.
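
The toy Python sketch below walks through that pipeline end to end on a three-document corpus: clean the text, tokenize it, drop a few stop words, and tabulate what remains. Real projects substitute natural language processing libraries for each stage, but the shape of the work is the same.

    # A toy text-mining pipeline: clean, tokenize, filter, count.
    import re
    from collections import Counter

    corpus = {
        "doc1": "Text mining finds patterns in written documents.",
        "doc2": "The documents are cleaned, tokenized, and counted.",
        "doc3": "Counting tokens is the basis of natural language processing.",
    }

    stop_words = {"the", "in", "is", "of", "and", "are"}   # deliberately tiny list

    tables = {}
    for name, text in corpus.items():
        cleaned = re.sub(r"[^a-z\s]", " ", text.lower())   # clean up: lowercase, strip punctuation
        tokens = [t for t in cleaned.split() if t not in stop_words]
        tables[name] = Counter(tokens)                     # count and tabulate

    for name, counts in tables.items():
        print(name, counts.most_common(3))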
