An introduction to data and information management process mapping as a needs analysis tool for improving research workflows and cyberinfrastructure.
I am a new consultant on campus helping with research cyberinfrastructure. Participants will learn:
– Basics of process mapping: purpose, capability, and use
– Growth mindset: you don’t know what you don’t know. Be open to curiosity and change. Encourage opportunities to reduce redundancies, inefficiencies, and gaps, as well as to increase reproducibility, repeatability, and replicability.
The University of Wisconsin-Madison is a large place with many staff members. A struggle familiar to many is how to balance the specialized needs of divisions, colleges, and other groups inside the university with the administrative goal of simplifying services at a central level. In particular, email transmissions handled by software without human intervention can break when modernization occurs: the requirement for multi-factor authentication and the deprecation of POP/IMAP protocols conflict with services run by groups such as the College of Engineering.
My presentation recounts my journey as the sysadmin for a ticketing program that needed to fetch and send emails through O365 as a computer, not a human who could perform multi-factor auth. I will lay out the basic shape of my solution, point out some pitfalls that arise from the University’s bureaucracy and hierarchical structure, and offer tips for others writing middleware in the trenches.
My audience will understand this better if they know the underlying mechanisms of how email is sent and accessed, as well as Office 365’s authentication mechanisms.
The key takeaway is that writing middleware or API layers at the University can be challenging but, given the right circumstances, successful. Another lesson is that obstacles are almost never purely technical: many of the tasks that come up require people-knowledge and social skill in the workplace. An appreciation for the wide variety of business needs across campus IT will be strongly emphasized.
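For readers curious about the general shape of the problem: one common way a daemon fetches and sends mail through O365 without a human present for multi-factor authentication is the OAuth2 client-credentials grant against Microsoft’s token endpoint. The sketch below builds such a token request body. This is an illustration of the general pattern under assumed placeholder values (tenant ID, client ID, secret), not necessarily the solution described in the talk.

```python
import urllib.parse

# Placeholder; a real Azure AD app registration supplies the tenant ID.
TENANT_ID = "your-tenant-id"
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

def client_credentials_body(client_id: str, client_secret: str,
                            scope: str = "https://graph.microsoft.com/.default") -> str:
    """Form-encode an OAuth2 client-credentials token request.

    A service POSTs this body to TOKEN_URL and receives a bearer token
    it can use to call mail APIs as an application -- no human around
    to satisfy multi-factor authentication.
    """
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })

body = client_credentials_body("app-id", "app-secret")
```

The token returned by this exchange is then attached as an `Authorization: Bearer` header on subsequent mail API calls.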
In this presentation, I will share how our team created a Person API to improve data integrations with core identity data. Using a technique called contract-first API development, our team was able to show what the API would look like from a user perspective first, which then influenced the technical implementation “under the hood”. I’ll also cover why we chose to create an API, and the benefits behind using APIs for data integrations.
A basic awareness or understanding of APIs is preferred, but isn’t required. I plan to briefly define an API as a basis for the presentation.
Attendees will learn:
– Knowledge of what an API is, and why someone would consume or create one.
– Understanding of APIs compared to how most integrations are implemented at UW-Madison, and why one might prefer an API approach.
– Basic knowledge of API design, and applying UX design techniques when creating a specification for an API to make sure it addresses user needs.
– Understanding of what the Person API is, and a basic understanding of how to get access and use it.
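To make the contract-first idea concrete, here is a minimal sketch of my own: write down the shape of a response before implementing anything, then check candidate payloads against it. The field names are invented for illustration and may not match the real Person API’s schema.

```python
# Hypothetical contract for a Person API response, written before any
# implementation exists -- the "contract-first" step. Field names are
# illustrative only; the real Person API's schema may differ.
PERSON_CONTRACT = {
    "personId": str,
    "firstName": str,
    "lastName": str,
    "email": str,
}

def matches_contract(payload: dict) -> bool:
    """True if the payload has every contracted field with the right type."""
    return all(
        field in payload and isinstance(payload[field], expected)
        for field, expected in PERSON_CONTRACT.items()
    )

sample = {
    "personId": "12345",
    "firstName": "Bucky",
    "lastName": "Badger",
    "email": "bucky@wisc.edu",
}
```

Agreeing on a contract like this lets API consumers give feedback on the user-facing design before any "under the hood" work begins.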
A few quick tips and tricks on Tableau software that solve small, but important issues. Some tips include:
– How to use Index() to solve disappearing repeating items in a table
– Adding tables to tooltips
– Partial average
– Custom sort by multiple fields
– Adding title to measure columns
Familiarity with Tableau is helpful but not necessary.
– How Index affects the visualization.
– There are many workarounds to achieve a goal.
Protecting the privacy of data is an emerging topic that impacts how IT professionals collect, store, present, and dispose of data. But what is really meant by data privacy? Why is it important? This session will identify the importance of privacy for our society and culture, what our responsibilities are for complying with privacy laws, regulations, and standards, and how to present information, such as privacy policies, in meaningful ways.
Interactive data visualizations let users choose which aspects of the data they want to display. This hands-on workshop will cover using Python and Streamlit to create interactive graphs where the user can change what data is being displayed. To follow along with the workshop, it will help to be familiar with Python and the Pandas package.
Learn how to create and modify an interactive plot using Python and Streamlit.
DoIT’s Web Platform/Services (WPS) team has brought together other campus partners (Captain Planet style) to gather data related to campus website needs in hopes of providing users with better service support and features.
Over the past 6 months, we’ve collaborated on two major efforts:
A quantitative discovery to determine how many public, hostable sites exist on campus.
A qualitative discovery involving a survey of campus website editors to better understand their wants and needs.
Join us for a deep dive into our process and what we’ve discovered so far! Afterwards, let’s jump high-five (just kidding, this will be remote) and collaborate on where to go from here.
In this hands-on session, attendees will build a filterable point map and text table using the free and user-friendly software, Google Data Studio. No geocoding or mapping experience is required! Attendees will learn the benefits and limitations of using Google Data Studio as a mapping tool.
Participants will need access to a computer and their Google account. Cleaned data will be provided to registrants prior to the session. A basic understanding of connecting a Google Data Studio report to a Google Sheet datasource and of Google Data Studio tools, functions, layout options and sharing options is recommended.
Prepare Before the Session!
Thanks for your interest in the Map It! Session at the 2021 IT Professionals Conference. To get the most out of our time together, you might want to do a little bit of prep work before our live session so you don’t fall behind from the beginning of the demo. There are two things we need you to do:
Make a copy of the data. We’re using a Google Sheet of Dane County Licensed and Certified Daycares. You should be logged in to the same account you want to use for the session (we suggest your @wisc.edu account), and then create a copy of the Google Sheet.
Sign in to Google Data Studio. You don’t need to do anything else in Google Data Studio other than logging in. But the first time you log in, you’ll need to go through several steps. Completing these ahead of time will streamline the session.
Based on the Incerto series of books by Lebanese-American author Nassim Taleb, this session discusses the properties of a normal distribution and how it is a useful tool for predicting and planning in the world of “Mediocristan,” the world of coin flips and human height. However, it is easy to confuse that world with the fat-tailed distributions of “Extremistan,” the world of wealth and app sales, where black swan events make things fragile. Next, we’ll introduce a spectrum: at one end is fragility, things that are weak to volatility (teacups, highly specialized tools); in the middle is robustness, things indifferent to volatility (a stone, T-bills); at the other end is antifragility, things that gain from disorder (muscles, technological innovations). We’ll end with a discussion of some tools to help identify and leverage antifragile systems.
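As a concrete illustration of the Mediocristan/Extremistan contrast (a sketch of my own, not material from the talk): simulate a thin-tailed Gaussian sample and a fat-tailed Pareto sample, then compare how much of each total the single largest observation accounts for. In a Gaussian world no single observation matters; in a fat-tailed world one observation can dominate.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# "Mediocristan": thin-tailed Gaussian, e.g. human height in cm.
heights = rng.normal(loc=170, scale=10, size=n)

# "Extremistan": fat-tailed Pareto, e.g. wealth; a tail index near 1
# means the largest value can rival the sum of all the rest.
wealth = (rng.pareto(1.1, size=n) + 1) * 1000

# Share of each total contributed by the single largest observation.
height_share = heights.max() / heights.sum()
wealth_share = wealth.max() / wealth.sum()
```

With 100,000 draws, the tallest “person” contributes a vanishing fraction of total height, while the richest observation typically accounts for a visible slice of total wealth, which is why sample averages are trustworthy in Mediocristan and treacherous in Extremistan.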