Future of Tech | David Sheehy | Workday

David’s journey into technology began in secondary school, where a career guidance counsellor encouraged him to try a simple Codecademy “build your own website” course. That summer project sparked a passion for problem-solving, instant feedback, and understanding how technology actually works. He went on to study Computing at the National College of Ireland, specialising in data analytics, and gained pivotal industry experience during his third-year internship at Workday, where agile development and real-world collaboration brought his skills to life. Graduating at the onset of COVID, David started his career in the UK, embracing the fully remote tech world from day one. He now works as a Software Development Engineer at Workday and is also pursuing a Master’s in AI at the University of Limerick.

Have you worked on any projects over the last couple of years that you thought could make a tangible impact?

Over the past few years, I have worked on two big climate and sustainability-focused projects that felt genuinely impactful.

My first was at a company called Immersive Edge, where we built simulation software for business education. It let teams run a simulated company through 52 weeks of trading in just an hour, making decisions together and learning from the results. Initially, it focused purely on financial performance. But in 2020, as governments began introducing regulations around emissions reporting, we decided to add a climate element.

I had to quickly get up to speed on scope emissions (the Scope 1, 2, and 3 categories used in emissions reporting), something completely outside my technical background, by speaking with experts in the field and understanding how they measured them. We then built this into the simulation, so decisions had both financial and climate consequences. A move might be profitable, but if emissions soared, there could be financial penalties or reputational risks.
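
To make the idea concrete, here is a minimal sketch of how a single decision can carry both financial and climate consequences. This is an illustration rather than the actual Go Green engine, and the emissions cap, penalty rate, and figures are all hypothetical:

```python
from dataclasses import dataclass

# Illustrative toy model, not the Go Green engine. The cap and penalty
# rate below are hypothetical numbers chosen for the example.
EMISSIONS_CAP_TONNES = 500.0  # hypothetical cap for a simulated year
PENALTY_PER_TONNE = 120.0     # hypothetical fine per tonne over the cap

@dataclass
class Decision:
    name: str
    profit: float             # projected profit of the move
    emissions_tonnes: float   # CO2e the move adds

def evaluate(decisions: list[Decision]) -> float:
    """Score a run of decisions on financial and climate outcomes together."""
    profit = sum(d.profit for d in decisions)
    emissions = sum(d.emissions_tonnes for d in decisions)
    overshoot = max(0.0, emissions - EMISSIONS_CAP_TONNES)
    # A profitable move can still drag the result down once penalties apply.
    return profit - overshoot * PENALTY_PER_TONNE

print(evaluate([Decision("cheap coal supplier", 90_000, 700.0)]))
# 90000 - (700 - 500) * 120 = 66000: profitable, but 24,000 lost to penalties
```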

The process was deliberately challenging. Adding sustainability elements into the mix made it far more dynamic and realistic.

That product, called Go Green, became one of the company’s most successful offerings.

My second major project was with Minerva Intelligence in Canada. While the company originally focused on data for the mining industry, one of its offshoots was climate risk data. For example, they had GIS datasets covering all of Canada that could predict the probability of wildfires or extreme heat events. These predictions were crucial, as wildfires are a huge problem in western Canada and extreme heat events have serious implications for public health planning. Our main clients were realtors, who used the data to assess the likelihood of property damage. The tech was so precise that, with just a postcode, we could identify the relative risk for that area.

While I didn’t create the climate datasets themselves (those came from brilliant climate scientists with PhDs), my role was to make them usable. The raw data was in GIS formats like GeoTIFFs, which are highly specialised and inaccessible to most people without the right expertise. I built full-stack systems that transformed those datasets into tools clients could actually interact with.
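
As a rough illustration of that plumbing (not David’s actual system), the sketch below samples a relative-risk value from a single-band GeoTIFF at a given point, using the rasterio library. The file name, the meaning of band 1, and the assumption that the raster is georeferenced in lon/lat (EPSG:4326) are all hypothetical:

```python
import rasterio  # pip install rasterio

def risk_at(path: str, lon: float, lat: float) -> float:
    """Sample band 1 of a GeoTIFF at a point given in the raster's CRS."""
    with rasterio.open(path) as src:
        # sample() yields one array of band values per (x, y) pair;
        # this assumes the raster's CRS is EPSG:4326 (lon/lat).
        value = next(src.sample([(lon, lat)]))[0]
        return float(value)

# Hypothetical dataset and coordinates (roughly Vancouver, BC):
print(risk_at("wildfire_risk_canada.tif", lon=-123.1, lat=49.3))
```

A real pipeline would also geocode the client’s postcode to coordinates and reproject between coordinate systems, but the core move is the same: turning a specialised raster into a plain “risk at this location” answer.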

What do you think is technology’s role in addressing climate change?

I think technology is incredibly valuable in distilling very large data sets into something consumable, understandable, and actionable by humans. We deal with so many huge data sets, from satellites, from global imagery, from weather patterns, from industrial outputs, from government – there are endless sources. The real benefit of AI, in my view, is that it can distil those huge data sets into actionable insights. We can identify things we wouldn’t have otherwise, like where we can make decisions to reduce emissions, plan ahead, and predict where emissions are going. Then we can focus our investment where it has the highest impact. However, there are drawbacks to using AI and storing large datasets, such as data centres using an astronomical amount of energy.

Do you think greenwashing exists in the tech industry?

I think there are good intentions, but technology organisations are so large that they can’t be treated as a single entity; there are different teams with different priorities. And tech companies can’t tackle this on their own; they need people who understand climate economics, government policy, and resource allocation.

It’s not any one group’s fault, and I don’t think it’s fair to label all of it as greenwashing. Some companies are better than others. Transparency is key. For example, the Central Statistics Office (CSO) reported that data centres consumed 22% of the nation’s electricity in 2024. Being transparent about these figures and then devising solutions, such as how to grow the economy while maintaining sustainability, is what’s needed.

Should AI’s growing energy demands and environmental impact be regulated, and if so, how?

Data centres use vast amounts of energy, but that impact depends on the source. In places like Ireland, with some of the best wind energy potential in the world, investing in renewables can make data centres far less damaging.

There’s definitely a balancing act required here. Globally, approaches vary: the EU has gone heavy on regulation with the AI Act, the US is more free-market, and China is very tightly controlled.

Looking back at social media in the mid-2000s, it launched with virtually no regulation. Only now are we catching up, and I would argue we still haven’t fully caught up in understanding how algorithms target people, affect vulnerable groups, or shape behaviour. AI could follow the same path if we are not careful. Hence, I think it’s better to regulate early, collaboratively, and transparently, bringing stakeholders in rather than dictating rules from above.

It’s a fine balance: protect society and the environment, but don’t choke off the opportunities AI brings. I don’t have all the answers, but I think regulation, done right, is essential.

Do you think we are currently experiencing an AI hype?

Yes. There’s a lot of hype from many sources, especially in how we consume information: social media, company press releases, and individual commentators all amplify it.

AI has huge potential if we use it right. For example, in my industry at the moment, there’s a belief that junior developers can be replaced by AI. Tech teams now risk neglecting the value of both the fresh, innovative view a junior developer brings and the experience a senior developer offers. Those two perspectives and skill mixes work well together; a team made up only of senior developers won’t achieve the best outcomes.

I firmly believe that junior developers should not be replaced. I have been using AI in my coding for the last few years, and it’s great as an assistive tool, particularly for automating certain tasks. Everyone has parts of their job that can be automated, but that doesn’t make their job redundant; it simply means they can allocate their time to more valuable tasks. That balancing act of youth and experience absolutely has a place in technology now and will continue to have one going forward.

Do you use AI tools? Do you think they pose risks?

I use Large Language Models (LLMs) a lot, and I can see why some people use them as the new Google, especially for asking the “dumb” questions. Where I think AI tools shine is in education. In college, I spent hours slogging through a programming course to learn a new language. Now, I can ask AI directly, in plain language, and get there much faster. It’s especially helpful for questions you might not want to ask in a lecture hall or in a forum where you might never get a reply.

That being said, critical thinking remains essential.

There’s also the risk of “cognitive offloading”: students just asking AI to do their homework and submitting it without learning anything. That completely defeats the purpose of education.

You can’t ignore AI in education; it’s here to stay. The challenge is how to integrate it responsibly. One approach is creating instances of AI models with specific prompting, so they act like teaching assistants. They could adapt to learning styles, for example, giving a visual learner lots of diagrams, and help guide someone to the answer rather than just giving it.
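
As a loose sketch of what that specific prompting could look like (an illustration, not a product David describes; the usage below is the standard OpenAI Python SDK, and the model name and prompt are placeholders), a system prompt can constrain a general LLM to guide rather than answer:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt: the "instance with specific prompting" idea.
TUTOR_PROMPT = (
    "You are a patient teaching assistant. Never give the final answer "
    "outright. Ask guiding questions, offer hints, and adapt to the "
    "student's style, e.g. describe diagrams for visual learners."
)

def tutor_reply(student_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; any chat model works
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("Just give me the answer: what does a for loop do?"))
```

Here the guardrails live entirely in the prompt; a real classroom deployment would add logging, teacher controls, and ways to tune the behaviour per learner.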

It’s still a new technology, so getting it right will take time and collaboration. If we manage it well, AI can genuinely transform how we learn.

There is evidence that platforms like YouTube or TikTok tend to push more controversial topics because that’s what gets clicks. Do you think there needs to be tighter controls within tech organisations to get more control of their algorithms, so that more extremist content isn’t being pushed?

Absolutely. The same applies to LLMs in AI: these can’t just be black-box solutions. The algorithms need to be transparent. Yes, there are intellectual property concerns, but that doesn’t mean companies can’t be transparent with governments and the public about how they use data, particularly to ensure it isn’t spreading disinformation or purely chasing engagement.

I think that’s the big problem on social media platforms. Their focus is on engagement, not on whether something is true or factual. Fact-checking doesn’t really come into it. My parents’ generation watched the six o’clock news or read the paper to get their news. Now, people get the latest headlines from Instagram or news apps.

Data privacy is a huge concern, particularly regarding how we consume information on political or controversial topics.

It’s important to ensure people’s data isn’t weaponised for one side or another, even on something as trivial as a colour preference for red or orange. That’s a simple example, but it illustrates the point that data shouldn’t be used to manipulate or polarise.

What do you think technology as a whole has given your generation, and what has it taken away?

It’s given access to information that would have been unimaginable twenty years ago. Back then, studying for a history project meant going into the local library, trying to find the right book, maybe waiting for it to be ordered in. Nowadays, we can access what we need at the click of a button. That’s fantastic, and it’s a real enabler. In my Master’s, I am examining how to give children from disadvantaged backgrounds the same access as those from more prosperous socio-economic backgrounds.

The flip side is that the same access extends to misinformation. It’s much easier now to falsify a video of an event: maybe the footage is old or from a completely different location, and the context is missing, especially in a short seven-second clip. There’s no nuance in the information you get, so you don’t know what to believe. This leads to disillusionment, where anything put out can be dismissed as fake, and genuine, important information gets neglected in favour of whatever your echo chamber wants at that time.

What is your vision of the Future of Tech?

In terms of my current passion for technology, I am particularly interested in exploring education. The tech tools that are emerging now, compared to when I was in school, are incredibly helpful for learning. If we can grow those tools, I would love to see education and how we learn transformed.

I think rote memorisation is in the past. We need to focus on critical thinking, how to critically analyse information, both as we grow up and throughout our professional careers. That’s where I see tech having a huge positive impact.

Lastly, I think the balance between regulation and innovation will always be a challenge, not just now, but for the next hundred years of tech.

Check out all of the interviews in our Future of Tech series, listen on Spotify, or watch the videos on our LinkedIn page.

At Barden we invest our resources to bring you the very best insights on all things to do with your professional future. Got a topic you would like us to research? Got an insight you would like us to share with our audience? Drop us a note to hello@barden.ie and we will take it from there. Easy.