NYC School of Data is a community conference that demystifies the policies and practices around open data, technology, and service design. This year’s conference helps conclude NYC Open Data Week and features 30+ sessions organized by NYC’s civic technology, data, and design community! Our conversations and workshops will feed your mind and inspire you to improve your neighborhood.

To attend, you need to purchase tickets. The venue is accessible, and the content is all-ages friendly! If you have accessibility questions or needs, please email us at schoolofdata@beta.nyc.

Thank you to Reinvent Albany and Esri for helping to cover conference costs and making it possible to meet in 2025.

And if you can’t join us in person, tune in to the main stage live stream provided by the Internet Society New York Chapter. Follow the conversation at #nycsodata on Bluesky.

Purchase your tickets here.

Join us as we share lessons learned from applying GenAI and Natural Language Processing (NLP) to alternative data sources! We’ll walk through a project where we used Public Pulse Mining to evaluate how the public engages with the General Services Administration’s construction projects and to better understand local stakeholder priorities and perceptions.

Then, we’ll dive into an interactive prompt engineering exercise using our master prompt templates for structuring unstructured data. You’ll gain practical takeaways on using AI for public engagement, including how to extract insights from free-text datasets like NYC public meeting YouTube transcripts, 311 feedback, and consumer complaints.
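To give a flavor of what a prompt template for structuring unstructured data can look like, here is a minimal sketch. The template text, field names, and example comment are illustrative assumptions, not the presenters’ actual materials:

```python
# Hypothetical "master prompt" template for turning free-text civic
# feedback (e.g., a 311 complaint) into structured JSON fields.
# All wording and field names here are invented for illustration.

MASTER_PROMPT = """\
You are a civic-data analyst. Read the public comment below and
return a JSON object with exactly these keys:
  "topic":     one short phrase naming the main subject,
  "sentiment": one of "positive", "neutral", "negative",
  "request":   the specific action requested, or null.

Comment:
{comment}
"""

def build_prompt(comment: str) -> str:
    """Fill the template with one free-text record before sending it
    to whichever LLM API or local model you use."""
    return MASTER_PROMPT.format(comment=comment.strip())

print(build_prompt("The crosswalk signal on 5th Ave is broken again."))
```

Because the instructions and output schema live in one reusable template, the same pattern can be mapped over thousands of records (meeting transcripts, complaints, comments) to produce a tabular dataset from free text.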

This session is open to all audiences, regardless of technical background. We’ll also share open-source tools and scripts on GitHub so you can apply these methods to your own datasets!

  • How confidently can we predict the impacts of zoning change on housing supply?
  • Can we use AI to create novel datasets that may allow us to better understand housing phenomena?
  • What would it take to model a reality in which we build 1 million housing units?

These were some of the questions that led Janita Chalam, an independent researcher with a background in software engineering and machine learning, to begin researching how open data, statistical modeling, and AI can help us tackle the housing affordability crisis.

This presentation will walk through what Janita has learned about the variables at play in NYC’s housing landscape and present a statistical analysis of the Bloomberg-era upzonings as a case study in the frictions involved in building more housing in NYC.
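As one toy illustration of the kind of comparison such a case study might involve, a difference-in-differences sketch contrasts the change in housing permits in rezoned areas against the change in comparable areas that were not rezoned. The numbers below are entirely synthetic and the method is a generic assumption on our part, not Janita’s actual analysis:

```python
# Toy difference-in-differences sketch for an upzoning case study.
# All permit counts are invented; this shows the method only and
# implies nothing about the real Bloomberg-era rezonings.
from statistics import mean

# Annual housing-permit counts per tract (synthetic data).
rezoned_before    = [12, 9, 15, 11]
rezoned_after     = [20, 18, 25, 17]
comparison_before = [10, 8, 14, 12]
comparison_after  = [11, 9, 15, 13]

# Change in rezoned tracts minus change in comparison tracts,
# which nets out citywide trends shared by both groups.
did = (mean(rezoned_after) - mean(rezoned_before)) - (
    mean(comparison_after) - mean(comparison_before)
)
print(f"Difference-in-differences estimate: {did:.2f} permits/year")
# → Difference-in-differences estimate: 7.25 permits/year
```

A real analysis would of course need many more controls (parallel-trends checks, lot-level covariates, spillovers), which is exactly the sort of friction the talk examines.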

Finally, Janita will propose some ideas for what kind of data and methodologies we might need in order to make bolder claims about what it takes to get us out of the housing crisis. By the end of this talk, we will hopefully have a better understanding of the role that data and empiricism can and should play in our conversations about housing policy.

This talk is for anyone interested in housing affordability and will not require any expertise in the technologies mentioned.