Spring '22

Good Systems Speaker Series

 


Nike: History of Disability Inclusion and Innovation


May 9, 2022

While Nike is one of the world's best-known brands, many people don't know that Nike has a long history of disability inclusion, including supporting disabled athletes, hiring people with disabilities, and inventing cutting-edge adaptive clothing. This presentation will look back at how Nike has invested in the disability community and collaborated with disabled athletes and employees to spark the innovation that became known as FlyEase. FlyEase started as shoes that were easier to put on with limited mobility and grew into a hands-free footwear experience. FlyEase continues to grow because, at Nike, we are never done when it comes to working with the disabled community to push the limits of what's possible. Most importantly, we want to enable the disability community to express themselves through fashion.

Lawrence
Dr. Megan Lawrence is the Global Director of Accessibility and Disability Inclusion at Nike, within the Diversity, Equity and Inclusion organization. She has worked with the disability community for over 15 years. Dr. Lawrence is deeply committed to “nothing about us without us” and to bringing the disability voice to the table to engage in inclusive design through a deep partnership with the Shepherd Center. As a person with a mental health disability, she invests deeply in creating mental health programs that support organizations and people through a community-of-care model. Intersectionality is at the heart of her work. Disability is not one-dimensional. It is through authentic storytelling that we uncover the rich diversity of the disabled experience.

 

 

Ethics and Responsible Innovation in Data-Intensive Smart Cities and Urban Mobility Management


May 2, 2022

Data and information technology have transformed our daily lives and have reshaped the ways in which cities and transportation systems are governed and operated. But too often, urban and mobility analytics is couched narrowly in terms of resource optimization and management on the one hand, and privacy and surveillance on the other. While efficiency and data protection are essential criteria in delivering urban services, there is a need to ensure that data-driven tools and technologies contribute to social justice and are not used to harm vulnerable populations. Currently, there is an increasing focus on bias and algorithmic justice with the aim of steering data-driven systems towards fairness-by-design, data democracy, and data justice, across a wide spectrum of data-driven technology (big data, AI, machine learning). These are consequential future directions since inadequate data and misspecified and myopic algorithms can cause harm and reinforce existing inequities. Using heterogeneous sources of structured and unstructured data in the context of contested streets and uneven transportation access, the talk will highlight the technological, methodological, and epistemological pillars of data-intensive smart city and urban mobility systems, with an emphasis on the power dynamics and political economy of urban data production, and the unintended consequences and governance challenges associated with such systems.

 

 

Thakuriah
Piyushimita (Vonu) Thakuriah is a Distinguished Professor at Rutgers University-New Brunswick and Director of the Rutgers Urban and Civic Informatics (RUCI) Lab. Her research interests are in transportation planning and operations; big data, urban informatics, and social and economic cyberinfrastructure; and the social equity and data justice aspects of data, AI, and automation. Her work has been supported by the National Science Foundation, the European Commission, UK Research and Innovation, the U.S. Department of Transportation, and other sponsors.

She has delivered keynote addresses and plenaries at prominent international and national venues such as the National Academy of Sciences, the European Commission (Brussels and Luxembourg), the Royal Academy of Engineering in London, and the Leibniz Center for Informatics in Germany. She was the founding director and Principal Investigator of the Urban Big Data Centre, a multimillion-dollar consortium funded by UK Research and Innovation that was responsible for providing a UK-wide urban big data infrastructure. Vonu was previously the Ch2M Endowed Chair Professor of Transport at the University of Glasgow, UK, and a European Commission Marie Curie Fellow. Her postdoctoral fellowship was supported by NSF's DMS at the National Institute of Statistical Sciences.

 

 

AI, MR, and Inclusion: Challenges and Innovations in Assistive Technologies


March 7, 2022

New digital technologies in areas such as Artificial Intelligence (AI) and Mixed Reality (MR) are opening new opportunities for people with disabilities. The rapid pace of innovation in machine learning and new kinds of hardware has enabled the development of novel assistive technologies that can provide entirely new capabilities for people who are blind or low vision, d/Deaf and hard of hearing, those with mobility impairments, and the neurodiverse. But the development of these technologies is marked by new challenges that may threaten the promise they hold. Questions of ethics and inclusion are often afterthoughts (if they are addressed at all) in design and implementation, which can lead to the exclusion of potential users. Conversely, the fear of abuse of technologies (e.g., face recognition) may foreclose potentially transformative tools that can enable people with disabilities. Finally, powerful new AI techniques rely on huge amounts of data, but the resulting models reflect the underlying shape of that data. How do we ensure that this data includes the full range of people in the world, such that the models fully represent everyone? In this talk I will describe some of the new opportunities and innovations in assistive technologies that we have been working on at Microsoft, as well as some of the ways that we are trying to drive research to create more inclusive technologies for all of us.

 

 

Cutrell

Ed Cutrell is a Sr. Principal Research Manager at Microsoft Research where he manages the MSR Ability group, exploring computing for disability, accessibility and inclusive design. He also holds an appointment as Affiliate Professor in the Information School at the University of Washington. He received his BA in Psychology and Cognitive Science from Rice University in 1992 and went on to study Cognitive Neuropsychology at the University of Oregon where he received his PhD in 1999. He has been working in the field of Human-Computer Interaction (HCI) since 2000.

Over the years, he has worked on a broad range of HCI topics with a special interest in interdisciplinary work. Research topics have included input technologies, visual perception and graphics, intelligent notifications and disruptions, and interfaces for search and personal information management. From 2010 to 2016, he managed the Technology for Empowerment (TEM) group in Microsoft Research India, focusing on technologies and systems useful for people living in underserved rural and urban communities. Since returning to the US in 2016, he has focused on inclusive design, exploring how computing can be used to extend and enhance the capabilities of people with disabilities around the world.

 

 

Analyzing social media from a user-eye view with PIEGraph


February 25, 2022

Quantitative social media research has traditionally been conducted from what might be called a platform-centric view, wherein researchers sample, collect, and analyze data based on one or more topic- or user-specific keywords. Such studies have yielded many valuable insights, but they convey little about individual users’ tailored social media environments—what I call the user-eye view. Studies that investigate social media from a user-eye view tend to be rare because of the expense involved and the limited number of suitable tools. This talk introduces PIEGraph, a novel system for user-eye-view research that offers key advantages over existing systems. PIEGraph is lightweight, scalable, open-source, and OS-independent, and it collects data viewable from mobile and desktop interfaces directly from APIs. The system incorporates an extensible taxonomy that allows for straightforward classification of a wide range of political, social, and cultural phenomena. The presentation will focus on how our research team is using PIEGraph to examine users’ potential levels of exposure to high- and low-quality information sources across the ideological spectrum. I will pay particular attention to how such exposure may be unevenly distributed across lines of race and gender.

 

 

Freelon
Deen Freelon is an associate professor at the UNC Hussman School of Journalism and Media at the University of North Carolina and a principal researcher at the Center for Information, Technology, and Public Life (CITAP). His theoretical interests address how ordinary citizens use social media and other digital communication technologies for political purposes, paying particular attention to how identity characteristics (e.g. race, gender, ideology) influence these uses. Methodologically, he is interested in how computational research techniques can be used to answer some of the most fundamental questions of communication science.

 Freelon has worked at the forefront of political communication and computational social science for over a decade, coauthoring some of the first communication studies to apply computational methods to social media data. Computer programming lies at the heart of his research practice, which generates novel tools (and sometimes methods) to answer questions existing approaches cannot address. He developed his first research tool, ReCal, as part of his master’s thesis, and it has since been used by tens of thousands of researchers worldwide.

 

Data Feminism


February 7, 2022

As data are increasingly mobilized in the service of governments and corporations, their unequal conditions of production, their asymmetrical methods of application, and their unequal effects on both individuals and groups have become increasingly difficult to ignore for data scientists and others who rely on data in their work. But it is precisely this power that makes it worth asking: "Data science by whom? Data science for whom? Data science with whose interests in mind?" These are some of the questions that emerge from what Catherine D'Ignazio calls data feminism, a way of thinking about data science and its communication that is informed by the past several decades of intersectional feminist activism and critical thought. Illustrating data feminism in action, this talk will show how challenges to the male/female binary can help to challenge other hierarchical (and empirically wrong) classification systems; how an understanding of emotion can expand our ideas about effective data visualization; how the concept of invisible labor can expose the significant human efforts required by our automated systems; and why the data never, ever “speak for themselves.” The goal of this talk, as with the project of data feminism, is to model how scholarship can be transformed into action: how feminist thinking can be operationalized in order to imagine more ethical and equitable data practices.

 

 

D'Ignazio
Catherine D'Ignazio is an Assistant Professor of Urban Science and Planning at MIT. She is also Director of the Data + Feminism Lab which uses data and computational methods to work towards gender and racial equity, particularly as they relate to space and place. D'Ignazio is a scholar, artist/designer and hacker mama who focuses on feminist technology, data literacy and civic engagement. With Rahul Bhargava, she built the platform Databasic.io, a suite of tools and activities to introduce newcomers to data science.

Her 2020 book from MIT Press, Data Feminism, co-authored with Lauren F. Klein, charts a course for more ethical and empowering data science practices. Her research at the intersection of technology, design & social justice has been published in Science & Engineering Ethics, the Journal of Community Informatics, and the proceedings of Human Factors in Computing Systems (ACM SIGCHI) and Computer-Supported Cooperative Work and Social Computing (ACM CSCW). Her art and design projects have won awards and been exhibited at the Venice Biennial and the ICA Boston.

 

 

The New York City AI Strategy


November 29, 2021

The recently published NYC AI Strategy is a foundational effort to foster a healthy cross-sector AI ecosystem in New York City. The document establishes a baseline of information about AI to help ensure decision-makers are working from an accurate and shared understanding of the technology and the issues it presents, outlines key components and characteristics of the local AI ecosystem today, and frames a set of areas of opportunity for City action.

 

Parikh
Neal Parikh is Director of Artificial Intelligence for New York City, in the Mayor’s Office of the Chief Technology Officer. Most recently, he was an Inaugural Fellow at the Aspen Tech Policy Hub, part of The Aspen Institute. He is Co-Founder and former CTO of SevenFifty, a technology startup based in NYC, and was a Visiting Lecturer in machine learning at Cornell Tech. His academic work has been cited thousands of times and is widely used in both research and industry. He received a Ph.D. in computer science from Stanford University, focused on artificial intelligence, machine learning, and convex optimization, and a B.A.S. in computer science and mathematics from the University of Pennsylvania.

 

 

Claim matching, tiplines & collaboration for effective misinformation response


November 19, 2021

Misinformation is often edited and repeated across multiple platforms, websites, languages, and formats (audio, video, image, text). In this talk, Dr. Scott A. Hale (Oxford Internet Institute and Meedan) and Ashkan Kazemi (PhD candidate, University of Michigan) will detail initiatives to crowdsource misleading content through "tiplines" on messaging platforms like WhatsApp and Telegram as well as state-of-the-art natural language processing approaches to group messages making the same claims. 

While much effort focuses on large, high-resource languages and unencrypted platforms, tiplines offer the opportunity for users of end-to-end encrypted platforms to share potentially misleading content with fact-checking organizations and to check whether that content has already been fact-checked. This talk will show how knowledge distillation can be used to create machine learning models for claim matching that perform well on low-resource as well as high-resource languages.

Meedan, a non-profit building digital tools for global journalism and translation, has built Check, an open-source web-based service to make it easy for fact-checking organizations to run tiplines on a variety of platforms. The research detailed in this talk has been integrated into Check and is used by over a dozen fact-checking organizations across the globe. Meedan is currently working to create better infrastructure to allow academics, practitioners, and community leaders to collaborate more easily on misinformation response.

 

 

Hale
Dr. Scott A. Hale is an Associate Professor and Senior Research Fellow at the Oxford Internet Institute of the University of Oxford, Director of Research at Meedan, and a Fellow at the Alan Turing Institute. His cross-disciplinary research focuses on advancing equitable access to quality information. Scott develops and applies new techniques in the computational sciences to social science questions and puts the results into practice with industry and policy partners. He is particularly interested in multilingual natural language processing, computational sociolinguistics, mobilization/collective action, agenda setting, and misinformation and has a strong track record in building tools and teaching programs that enable wider access to new methods and forms of data.
Kazemi
Ashkan Kazemi is a 4th-year PhD candidate in the Language and Information Technology (LIT) lab in the Computer Science and Engineering department at the University of Michigan. He works with his advisor, Rada Mihalcea, on natural language processing and its applications to studying social media and, more broadly, computational social science. His PhD focuses on using NLP to develop methods for understanding and responding to misinformation in both English-speaking and non-English-speaking communities. Ashkan is an organizer of "ACL Year-Round Mentorship," a program that provides mentorship to students around the world who are interested in NLP research, and he has served on the program committees of the EMNLP, ACL, and NAACL conferences.