A Human Approach to AI Development and Upskilling
Dave Erickson 0:00
So you think AI is too robotic and machine-like; Hell, you are starting to think that about yourself. How can you make AI more human before it makes you more of a robot? On this ScreamingBox podcast, we are going to discuss a human approach to AI development and upskilling, which you can use to build better products and a better you. Please like our podcast and subscribe to our channel to get notified when the next podcast is released.
Dave Erickson 0:48
AI this and AI that, everyone wants AI, but at what cost? How can you really develop products with AI, and how do you develop yourself with AI? Welcome to the ScreamingBox technology and business rundown podcast. In this podcast, Botond Seres and I, Dave Erickson, are going to discuss a human approach to AI development and upskilling with Bala Muthiah, Director of Engineering at Lyft. Please note that Bala's ideas and thoughts presented on this podcast are just his personal views and not those of Lyft. Bala is a people-first technology leader who brings a rare blend of engineering depth, strategic insight and a genuine passion for helping others grow. His career spans impactful roles across the Silicon Valley ecosystem, most recently at Lyft, where he has contributed to complex, high-scale systems while championing collaborative, human-centered leadership. Bala has also spent years expanding access to technology opportunities by volunteering as a computer science teacher for elementary students, mentoring aspiring engineers through organizations like CodePath, and supporting entrepreneurial programs such as NFTE and Defy Ventures. His work reflects a belief that technology is most powerful when it lifts people up, and that great leaders are those who help others find their potential. Bala, welcome to the podcast.
Bala Muthiah 2:09
Thank you. Thank you for a great introduction. Great to be here.
Dave Erickson 2:14
To begin, let's talk about how you got involved with technology in the first place.
Bala Muthiah 2:18
Yeah, it's not, I would say, a super exciting, turnaround story coming up. Growing up in India, education was really a big thing. It's part of the culture, part of the community, and it's the thing that takes you to the next level, right? And that's how my journey started, too. I should say my mom was the one who inspired me to get into engineering and technology. That's how my journey started. So I went to engineering school, graduated, and then started working in a small company in India, and then that brought me to the US. My journey to tech or engineering has always been the curiosity and the inspiration that my mom showed.
Dave Erickson 3:01
So, in a sense, family gave you a good background for becoming an engineer.
Bala Muthiah 3:06
Yes, family definitely gave me a lot of inspiration and enabled us. We were not really well off. We were struggling to make ends meet, but regardless, their goal was that education was key, and they fostered that kind of mindset in us.
Dave Erickson 3:23
And for your engineering journey, you got your first kind of job. I assume you started as, what, a developer?
Bala Muthiah 3:30
Yes, engineer, developer.
Dave Erickson 3:33
And when you began as an engineer, there wasn't a lot of AI at the time. But what were you thinking of as, I don't know, a career path or engineering path that you wanted to take to grow as a person? How were you thinking back then about where you wanted to be in the future?
Bala Muthiah 3:56
Yeah, back then, AI was definitely not a thing within, I would say, mass reach, right? The concepts existed, things were there, papers were there, but it didn't reach the audience that it has now. When I started, that's when Web 2.0 and the whole front end were developing. So I naturally got into that, because I'm always a big fan of design, of how things work and how humans interact, the human aspect of the interface between the human and the machine. That really hooked me, and I started as a web developer, building interfaces, building small applications that are hosted on the web. That's when Firefox was coming up, the whole browser revolution, Chrome was coming up. That's how my journey started. And from there it went to different parts, going deeper into back end and mobile, but always around the web, the point where the human touches the computer. That's my starting point, and to date, I think that's the most fascinating aspect for me. Everything has changed, but not the interaction, and how you make great things out of that interaction is what keeps me in tech.
Botond Seres 5:12
Yes, it's a very profound thing to think of AI as part of the human-machine interface; in my opinion, it's maybe the latest iteration. And speaking of AI, how do you define a human-centered approach to AI development, and why does it matter now, more than ever, inside large engineering organizations?
Bala Muthiah 5:43
Yeah, I mean, the last part of your question, right? Why does it matter more than ever? That's where I want to start, because I feel we have reached a point much like every pivotal moment of innovation. People talk about cars, people talk about electricity, all the way back. Those were really pivotal moments, and I think for AI, this is one of those times, because this is the moment where we can say that pretty much everyone, or more than half of the population, has access to it in one form or the other, either directly or indirectly. And the people who are actually making and building these systems and software and products, pretty much 80 or 90 percent of the engineers, the technologists, have access now. The important thing is, there is excitement, there are a lot of opportunities, there are dreamers who are really building and painting big futures for us. At the same time, why are we doing all of these things? If you come back to the why, it comes to that question: humans, right? We are doing this for ourselves, for the humans that are going to come, generation after generation. That's why it's important to keep that front and center. Things will change, right? What we were using to build two years ago is different from what we are building with now, and rapid innovation is happening. One thing that has not changed is that everything that comes out as a tool or as an option is always anchored on the human aspect. That's why it's important to know how we go about it. How do we create value? How do we create what we want? It's going to be great, as people are promising, if we do not forget about the human aspect.
Botond Seres 7:33
Right. I mean, it's incredibly important that we don't forget about the human aspects, especially as, to be honest, we sometimes tend to with AI, especially with how many companies are restructuring to replace workers with AI instead of enhancing their work with AI. And I really wonder what your thoughts are on either using AI to replace some fraction of the workforce, or using AI to enhance the workforce that is already there.
Bala Muthiah 8:12
Yeah, I'm more on the optimistic side, where we can do more instead of trying to reduce. Let's say we want to achieve 10 things, and you have 10 people to do that. One way to look at it is, if I have seven people, I can still do the 10 things, if AI is used well, right? The other way to look at it is, if I still keep the 10 people, I can probably do 15 things, or maybe 20 things. No one wants less, right? If I go back to the primitives of cells, we all started as single-cell organisms, and we wanted more; no one wanted less. No one was content to say, oh, one cell is good. It's human psychology; it's actually more primitive than that, it's our biology. We always want to do more, create more value, create more out of everything we have. So following that, doing more with AI is the camp I am in, and that's where I believe the world will go. Yes, there is a temporary gain by cutting, let's say, people in an organization, but it's only going to give you a very, very short-lived relief. And you are actually putting yourself at a disadvantage against your competitors, where someone else is doing more with 10 people. You gained by reducing three people and having a seven-person team, but you are, in a way, giving up to that other person who is unlocking a lot with 10 people. So wanting more has been our innate nature, and that will continue. Maybe there's media hype, there are market expectations, people might get a dopamine hit when you say, oh, I can do this with just seven people, but that will not last, and you will come back and you will want to do more with more people.
Dave Erickson 10:08
You know, there are going to be some people who decide that for them, AI isn't what they're interested in, and they're not going to add it to their career knowledge or what they're doing. And then there are going to be those people who use the tools and become familiar with AI tools, and they're the ones who will probably have more secure working environments. There is another aspect of this, and that is, you know, when AI starts getting combined with robotics, there is an opportunity to replace human labor with robotic labor. But then the question becomes, is that beneficial for society? And can those people who are displaced be elevated into a different career, into different work that is actually better for them? I think society, government and companies are going to have to figure out how to do that, and I think that's the real challenge of AI, on that higher level. But on a more focused level, people are integrating AI into products. I think one of the humanistic sides of AI, and this has been true of generative AI like ChatGPT, is that humans find there is a human way to interact with data: by asking questions in normal human language and getting answers back from the AI that sound like a real person, right? And I think that's facilitated the use of that level of AI. But in the tech world, that generative AI is a small part of what AI really is. So the question: let's say I'm a developer, and I want to start looking at how to bring AI into the products that I build. What are some of the things that you see when it comes to what developers should be doing to integrate AI into their product development, and where do you see AI going as far as integration into product development?
Bala Muthiah 12:17
Yeah, great question. It is also a very essential question for this time, right? This is where the transformation is actually happening. People are migrating from not using the tools to using the tools. There are camps of people who are skeptical, rightfully so, of the value it provides. So if someone is starting now, where do they start? I think of it in two buckets. One is in terms of helping you be a better engineer, right? The key thing to look at is being a better engineer, not offloading everything. Yes, it's really exciting to have a one-shot prompt build a whole application for you, but you are essentially just asking it to do something versus doing something yourself. The core DNA of a developer, or any engineer, is building things. That's what they get excited about. So thinking about how I can build things in a better way, much more efficiently, much more complex, much more fun with AI, is one way to think. The next thing is coming back to the why: why are you building something? What is the reason? Who is your customer? Who are you doing this for? Can they benefit if you augment what you're building with AI as part of it? So one is using AI to code itself; the other is using AI in the thing that you are building. Those are the two streams people can look at. On the second one, using AI in the product, it may not be entirely up to an engineer to decide what they are building. Especially if someone is starting out, they are most likely going to be told what to build, unless you go have your own startup, right? That's a different story. You are going to be part of a team or organization. It has goals, a product roadmap, and it might or might not have AI, right? What is in your control is how you build, and that's completely in your control.
Most companies these days offer tools that can be embedded into the developer lifecycle, either to code, to test, to verify something, or to come up with different options. If you are in a place where, hey, I don't have access to these tools, my company for some reason hasn't opened them up yet, you can still use AI one step behind that, in the thinking process. There is a problem you're trying to solve, and you have a solution. Usually you do this with a whiteboard, right? You come up with the problem, you have two different options, and you do the pros and cons. This is where you can engage AI. The human chat form you mentioned: you can interact with it as if it's another senior engineer or a developer or a lead who has experience, brainstorm, and see if you, collaboratively with the AI chatbot, can poke holes in the design you have. Are there any downsides, things that you are not seeing today that might come up when the system scales, or around security? Quite a few options. So you can brainstorm, purely use that idea, and then go code. There are various stages where AI can plug in, depending on where you are on the curve. Everything is on the table for you to grab.
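The whiteboard-style brainstorming Bala describes can be made concrete as a small helper that packages a design into a structured prompt for a chat model. This is only an illustrative sketch: the function name, template wording, and sections are my own, not from any tool mentioned in the conversation.

```python
# Sketch: turn a design (problem + proposal + concerns) into a structured
# "poke holes in my design" prompt for an LLM chat session, before any code
# is written. The template and names here are illustrative assumptions.

def design_review_prompt(problem: str, proposal: str, concerns: list[str]) -> str:
    """Build a whiteboard-style design-review prompt for an LLM chat session."""
    concern_lines = "\n".join(f"- {c}" for c in concerns)
    return (
        "Act as a senior engineer reviewing a design.\n\n"
        f"Problem:\n{problem}\n\n"
        f"Proposed solution:\n{proposal}\n\n"
        "Please poke holes in this design. In particular cover:\n"
        f"{concern_lines}\n"
        "For each issue, say what breaks, when, and one mitigation."
    )

if __name__ == "__main__":
    prompt = design_review_prompt(
        problem="Rate-limit API requests per user.",
        proposal="In-memory token bucket per user ID, one bucket map per server.",
        concerns=["behavior when the system scales", "security", "failure modes"],
    )
    print(prompt)
```

The point of the structure is exactly what Bala suggests: by forcing yourself to write down the problem, the proposal, and the lenses (scale, security), the model acts as a brainstorming partner rather than a code generator.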
Botond Seres 15:43
Right now, if you don't mind, I would like to take a small step back from talking about AI and talk a bit more about innovation and how we can balance it with AI. Since you have experience operating, well, let's say, systems at a massive scale, with real-time and safety-critical systems, how are you able to balance innovation in AI with the responsibility to maintain reliability, trust and user safety? In my personal experience, AI is not the best at these things, so I would be very happy to hear your thoughts on this.
Bala Muthiah 16:32
Yes, you are very spot on, right? AI is not really good at these things that a human can catch, or that a human can empathize with and understand. Even leaving systems aside, not even talking about software, talk about teams: what is a high-functioning team? One of the things we look for in a successful high-functioning team is diversity. People should have different perspectives, different opinions, different thought processes. When you're dealing with AI, you might not get that out of the box, because it is trained on certain models, on certain things. So it's going to be pretty narrow-minded. You need to really poke it to think differently, right? Try to challenge it so it doesn't give you an out-of-the-box cookie-cutter answer, which, if you ask again, will be a different answer to the same question. That's the downside of it. Coming back to innovation: if you look at any invention, there's the saying, necessity is the mother of invention, right? You innovate when you are constrained. You innovate when there are no other options out there; that's when people come up with a new way of doing things. Innovation itself is a new way of doing things within existing limitations. With AI, in a way, it has become harder, because there are no limitations. Now you can just ask it to do something, and it will blurt out a lot of code for you, with or without making any sense. It'll just throw it at you, and you are, in a way, losing the innovation edge. It's not something that you understand, challenge and push the boundaries of, and every time someone has pushed the boundaries, that's when innovative things have happened. So understand that, and then tie it to the non-functional aspects: security, privacy, even other aspects that we forget.
AI systems are not designed for that, but purely using AI as a tool at that point can be valuable. If you're coming up with a solution, then think again: what am I missing? What is not here from a security angle? What is at risk here? Applying those lenses will really help, but purely relying on AI in itself to solve these problems will lead us to a place where we do not want to be. We'll have a lot of things created, but nothing will be of value. Then you will feel like, oh, what's happening? This is like the consumption industry: back in the day, when you only had a TV with cable, you only had four options, but now you have 10 different streaming platforms. You don't know what to choose, and you end up not watching, searching for 30 minutes and walking away. Same with AI: it's going to give you every single thing, but use it as a tool, versus fully relying on it to be the driver of your solutions.
Dave Erickson 19:37
Well, never underestimate human laziness, and that seems to be where a majority of people who are using AI are coming from. And you can see that in development in all kinds of aspects. But I think that also comes from people interfacing with it and saying, oh, it feels like I'm just interfacing with a person, and so they're lulled into the idea that it has the same emotions or thinking or whatever that a human has, when it doesn't, right? It is based on a limited model. And that's where innovation and creativity, I think, become really important, because a lot of these AI systems are using a data set, either the data set of the internet or a specific data set that you fed it, and it's just reorganizing and rehashing the data it has to work with, versus actually creating or innovating. But to many people, if they ask a question because they're working in an area they're not familiar with, it sounds like it's thinking, right? And you know, the fear is that if everyone's using it just to do that, and nobody's using it to do real creation, eventually it's just going to run out of ideas from rehashing the same data over and over. And that could be the same in a company that's fed it a data set: if it's not updating that data set consistently or putting new data in, then whatever the AI is doing is going to stagnate as a development tool. Yes, it's great if you say, please write me a login script, because there's so much content out there about login code. Who needs to really create a new login script? It's pretty obvious what one would look like. But I think that's where innovation in development is going to meet its challenge. You're going to have developers who just ask it to blindly do something, and then it's really going to be the developers who are focused on, how do I take whatever the AI has done and add to it to make it something new and creative?
What's your feeling on the idea that developers need to still code by hand in order to give AI new things to look at and incorporate into their code.
Bala Muthiah 22:04
Yeah, that's very true, right? It's a common phenomenon we are seeing in the industry today. You walk into any company, you're pretty much going to see this in one form or the other: developers still writing code by hand. When I started my career, back in the day, we used Eclipse as an IDE, for Java and such. When I was a fresher, the lead in my team, probably half-jokingly, would not allow us to use the debugger built into Eclipse, because he would say, you need to go and understand the code and see what it does, versus completely relying on a debugger to step through and understand it for you. To us it was like, why are you not letting us use it? It's there, it gives me everything. But taking a step back, in a way it made sense. It wasn't strict enforcement, but his concept was that you will lose the ability to think things through if you fully rely on something. That same thing applies here, to an extent. This is where leadership really matters, right? We can get short-term gains if we measure the success of an organization by the number of PRs being created, the amount of code being generated. You know, back in the day, probably even before I started coding, there used to be kilo-lines of code as a measurement. They used to measure how much code you were churning out as an individual developer, and your performance was rated based on that. In a way, we have come back to that indirectly. I've seen OKRs being set within companies around the number of PRs generated by AI; you can even see, controversially, big CEOs saying in earnings calls, oh, 70% of our code is written by AI. That begs the question of the value added: what did you really unlock if 70% is created by AI, right?
So that's where, again, the leader's role comes in: give the right metrics to measure, ask your people the right questions. Don't assess them based on how much output they produce, or they are going to go output more code, just give one prompt and get a bunch of things generated. If you ask about the right value, like, what is the quality of the output, how robust is it, how is it sustained, those are the right questions. Then AI usage in itself doesn't become the goal. It's okay to drive adoption; you can have some goals to get everyone to start using it, because earlier we were talking about people not really using these things. But once you do that, keep your goals clear. Understand, again, the human: why are you doing this? Every system is built for some human behind it. Understand what that person needs, and don't set metrics that are measured purely without value.
Botond Seres 25:09
Couldn't agree more. Kilo-lines of code generated has, with the new addition of AI, become somewhat irrelevant. I would argue it always has been, but measuring outcomes is much more difficult than measuring lines of code.
Dave Erickson 25:27
Yeah, I'd hate to get into the situation that some schools are in, where basically the teachers are using AI to grade papers and the students are using AI to write the papers. So basically, you're having AI do the writing and the grading. I can't tell you how many times I've seen that situation, even in development. I think that the sooner engineering teams start focusing on using AI as more of a productivity tool to do some of the work, instead of trying to do all the work with it, the better results they'll get. You have a huge engineering team and a direction on not just one level, but probably many different levels. If you could talk directly to the AI makers, the people who are designing the AI systems, what would you say you want as a development team manager? What would you like AI to do for your developers, versus what it's currently doing?
Bala Muthiah 26:41
Oh, great question. I know Christmas has gone, but I still have my wish list, so it's more like the coming-soon stuff. One of the things, actually I would say two. One is definitely: if you are an AI company and you are building products to sell, that's pretty much your whole goal and vision, but stop selling false dreams, right? I saw, I think it was a couple of months ago, somebody from one of the big companies being asked: hey, if people are building tools or creating things with AI without understanding how they work, then when something breaks, how will an engineer debug or develop? This person's response was, you don't have to debug; you can rebuild everything, making sure that one thing works again. If you look at it in the abstract, it seems like, wow, this is a good thing. But then ask the question, why is this person saying this? Because, of course, if you kill everything and rebuild, it's going to make them more money, right? You're using their tokens, you're using their infrastructure, they are going to get rich. (Botond: I never thought of that) So the whole idea of, oh, stop debugging, don't worry about fixing a bug, you can rebuild the whole thing with a prompt quicker and faster than you can actually debug: that's a false thing to propagate, and it's going to hurt more in the long run. You can do that, but there's a limit, and there are sensitive things where, if we cannot debug and instead try to rebuild, there are consequences. So that's one: stop selling nightmares in the form of dreams. That's not true. Then, coming to something more tactical:
One of the things which I feel will still matter on a day-to-day basis, and you talked about this a little earlier too, is that the whole experience appears as if I'm interacting with someone who is more intelligent, who knows it all, who has all the answers. That's the user experience we have created for these chat interfaces, right? I think we can tone it down a little to make it more realistic, especially in coding environments. Yes, it's good if you are building a customer care agent to be more empathetic and understand everything about you. But on a coding level, you don't need emotions while you're coding, right? It has to really tell what it knows and what it does not know, and you need the ability to clearly identify which part is hallucinated. One feature I would want: if you go into how LLMs work, there is a confidence score; it's all some probability. If they could publish the confidence score, hey, this answer has this much confidence in reality, and the rest is made up, hallucinated, that would give a good idea, right? Probably a color code: green is highly confident, and then you go into orange, yellow, red. Then, okay, I know at this point it's probably making something up, and I can go figure it out.
Botond Seres 29:52
Pretty sure I saw people that did exactly that with an LLM, which colored the parts of the text different colors based on the confidence values.
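The confidence-coloring idea Bala and Botond describe can be sketched in a few lines. The (token, probability) pairs below are made up for illustration; in a real system they would come from the per-token log-probabilities that many LLM APIs can return.

```python
# Sketch: color tokens by model confidence, green/yellow/red, as discussed.
# Token probabilities here are invented for the demo; real values would come
# from per-token logprobs in an LLM API response.

def confidence_bucket(prob: float) -> str:
    """Map a token probability to a rough confidence bucket."""
    if prob >= 0.9:
        return "green"    # highly confident
    if prob >= 0.6:
        return "yellow"   # somewhat confident
    return "red"          # likely made up; verify by hand

ANSI = {"green": "\033[92m", "yellow": "\033[93m", "red": "\033[91m"}
RESET = "\033[0m"

def colorize(tokens):
    """Render (token, prob) pairs with an ANSI color per confidence bucket."""
    return "".join(f"{ANSI[confidence_bucket(p)]}{t}{RESET}" for t, p in tokens)

if __name__ == "__main__":
    sample = [("The ", 0.98), ("endpoint ", 0.7), ("rotates ", 0.4), ("keys", 0.3)]
    print(colorize(sample))
```

The thresholds are arbitrary; the useful part is the visual cue that tells the reader exactly where to stop trusting the output and start verifying.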
Dave Erickson 30:04
That comes back to, you know, the fact that the math that LLMs are based on is more of a binary math; it doesn't have the engine or intuition to work with gray. For it, it's black or white, and that's an algebra problem. I know they're working on it, and maybe in six months or six years they'll figure out the math, and the LLMs will have more ability to work with gray, and that will improve hallucinations and other things. But it's not going to address the root, the basic, I guess, illusion or delusion that humans have about AI, which is that it can create and that it is conscious. And I think that's a trap that AI companies don't talk about enough with their products. They just talk about how wonderful the products are, versus, well, here's the limitation, you have to do this much thinking yourself. And again, I come back to: humans are wonderful at being lazy. They don't necessarily want to think that well. But I think it is critical, definitely for product development, that you really put in the thought work behind it. You can't let the AI try to do it all.
Bala Muthiah 31:29
Yes, yes. 100%, 100%. We can see a lot of slop being created now across, like, every field you name.
Botond Seres 31:36
Yeah, it's the word of the year. I think it's the word of the year for 2025.
Dave Erickson 31:43
If you are looking at AI tools now, what kind of tools are you mostly focusing on for engineering?
Bala Muthiah 31:53
I'm thinking of this more as, like, a pyramid, right? The basic stuff is the IDE; that's where the actual engineer or developer makes the magic happen. They are interfacing, everything is available for them there. They know the code base, they understand the system. So that's the foundation. There are quite a few tools we know, from Cursor to Copilot, and new things coming up pretty much every now and then; that's the fundamental stuff. The next layer is: okay, that's the end, where I'm actually going to put my fingers on the keyboard to code, but we don't have a lot of tools before that, like the brainstorming we talked about. It ends up being a hallucinating "intelligent" agent, as if it's telling me great things. Understanding the problem space, understanding the context, how do you get everything in so it can be a better thought partner for me? There's not much in that space today. Of course, Claude Code is there, and yesterday, I think, they launched something like Cowork; quite a few things are happening there, but it's not fully there. The brainstorming angle is still missing; some of these tools do a good job to a certain extent. Then, once you have an idea and build something, comes testing. We were talking about thinking as a concept, right? There was a thing I saw recently: hey, how do you make your product into an AI-ready product? If you change the loading text into "thinking," then it becomes AI, pretty much. That's what's happening: what was "loading," taking time to give you an answer, is now converted into "thinking," and all of a sudden you think there's some intelligence behind it. So the testing tools are in that zone where I feel more innovation is needed. Testing is one of the things most engineers are not excited about. They want to create things, and they don't want to go test them.
I've done this so many times: shipped code into production and then tested it there. So we test it in production, not before. I wish there were more tools which would simulate things, do scenario planning, so many different things they could do. That would be great, and now with agentic AI, of course, people are coming up with different steps, but again, there's this agent-washing on the other side: everything is an agent; what was once a script is now called an agent. So those testing areas are not covered yet. I wish more innovation were coming there.
Botond Seres 34:29
Yes, we could have thousands of agents testing the same part of the code, each with their own random approach, doing everything users can think of, using everything as incorrectly as possible. That would be great. But go ahead, Dave.
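Botond's "thousands of agents each with their own random approach" is, in miniature, randomized (fuzz/property-based) testing. A minimal sketch, with an invented toy function under test and invented names; real tooling in this space includes property-based testing libraries, but nothing here comes from a specific product.

```python
# Sketch: many simulated "agents" hammer the same function with random
# inputs, each checking invariants that must hold for any input.
import random

def slugify(text: str) -> str:
    """Toy function under test: lowercase, keep alphanumerics, dash-join words."""
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in text).split()
    return "-".join(w.lower() for w in words)

def agent_run(seed: int, trials: int = 100) -> list[str]:
    """One 'agent': feed random strings in, collect any invariant violations."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        s = "".join(chr(rng.randrange(32, 127)) for _ in range(rng.randrange(0, 20)))
        out = slugify(s)
        # Invariants every output must satisfy, however weird the input:
        # all lowercase, no spaces, no leading/trailing dash.
        if out != out.lower() or " " in out or out.startswith("-") or out.endswith("-"):
            failures.append(s)
    return failures

if __name__ == "__main__":
    # A swarm of agents, each with its own random seed.
    all_failures = [f for seed in range(50) for f in agent_run(seed)]
    print(f"violations found: {len(all_failures)}")
```

Scaling the seed range up is exactly the "thousands of agents" picture: each seed explores a different slice of the input space, and any failing input is a concrete, replayable bug report.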
Dave Erickson 34:44
I remember on one of our podcasts, we had Uncle Bob, who is a very famous clean coder, and he brought up why clean code, or thoughtful code (he was talking more about thoughtful code), is so important. And when you think about it: cars. A lot of the coding in cars is for safety features, especially nowadays, blind-spot warnings using LIDAR to detect distance and warn people. If that code isn't done well and tested well, people's lives are at stake. Or you're writing the autopilot code for an airplane, right? And I'm sure in the work that you're doing, the code really is the last defense between a harmful incident and the safety of a person. If AI is doing everything, or even if people write the code and AI is testing the code, do you really have the confidence that that code is not going to put somebody in danger? I think that question is a really important one for managers and product developers. Is that correct?
Bala Muthiah 36:06
Yes, 100% right. The human aspect is very important as part of the process, because we are building it for humans. So there should be a human in the room to build and test and verify and give the final seal of approval. Of course, those humans can use these tools to help. You talked about autopilot in the airline industry: to date, even though pretty much your entire flight journey could be on autopilot, landing and takeoff are still done manually. I believe, if I'm not wrong, there's no regulation or law exempting this: you cannot do a landing and takeoff on autopilot, you have to do it manually (Botond: in emergencies, you can), yes, but in practice pilots don't engage it, because of the risk and what you'd be putting people through. And this risk is not necessarily physical all the time, right? There are other things too, like data. How confident are you that your data is secure? What if data goes into the wrong hands? These AI tools could also end up in the wrong hands, be put to misuse and abuse in very different ways, and impact people's lives. Think about someone's Social Security number getting exposed and what kind of consequences that can have. And there are so many other things, like health data. So there are dangers that, unless a human is there to understand and empathize with the impact, AI cannot handle; it cannot empathize. Also, sometimes we talk about AI as an entity, but it's a piece of code, so you cannot expect it to do anything other than what a piece of code does.
Botond Seres 37:49
Putting LLMs aside for a moment, since we are already talking about other expert systems, like autopilot, I was wondering what some of your favorite examples of other types of AIs or expert systems are. I'll start: my favorite one is the one that just keeps folding proteins to find cures for different types of cancers.
Bala Muthiah 38:15
Yeah, in general, I think health is a great area where there are large amounts of data that would be humanly hard to process and assess, so AI can accelerate that, right? I always think of AI as an agent you should use to accelerate, not to fully do. It can speed up your work, but it should not do your work. So on that front, a very simple thing, right? I know you wanted to stay away from LLMs, but there are little things: when I go to a doctor now, they are using a transcription device. It gives me a good feeling, because the doctor is with me instead of typing notes on the keyboard while I'm talking to them, right? Many of those
Botond Seres 38:57
Oh yeah, that's different from LLMs; text to speech and speech to text. Yes, but
Bala Muthiah 39:01
I'm just saying, look at what kinds of things are augmenting us. Those are little things, but they really can change your experience, and some of the manual paperwork, stuff like that; those innovations are really, really
Botond Seres 39:12
Oh yes. Just before this recording, I was going to read the email Dave sent me, and we have a Summarize option in Gmail. Yet we don't have a text-to-speech option. Why?
Bala Muthiah 39:27
Yes, I guess they want to charge you more money to get that to you, even though technically it's not going to be very different.
Dave Erickson 39:36
I do think one of the other things AI is good at, and we've done several healthcare projects using AI, is taking very large data sets and running prediction analysis on them. This is important in healthcare, but in healthcare we also use it to help speed up documentation processes, and that can be applied to any organization. I do think that when combining machine learning with LLMs and large data sets, you can get prediction opportunities, or prediction runs, that are very close to, and in some ways a little bit better than, human, because the LLM and the machine learning can incorporate so many different types of data, whereas humans sometimes don't have the ability to see all the data. So that's one of the areas where I think AI really has an opportunity to help. But then there's that thing, the human emotion, right? The AI gives you, okay, we think these are the three most probable sequences of events. And then the human has to sit there and say, what does my gut say is right? Which one do I believe? Which one do I have faith in? And that really is where the human component still gets to have its role, I think.
Bala Muthiah 41:11
True, 100% right. Again, it comes back to who we are doing this for, and that person has to be there. And yeah, predictions and comprehension too. You talked about email summaries; I really wish we applied this to the EULA, right, the end user license agreement. I really want that summarized so I know what I'm getting into. Right now we just click Accept without even reading anything. Imagine it highlighting things: oh, this is a potential danger; here's what you're missing; you're probably giving up your data, giving up your rights to protect yourself. Those kinds of little things; this is some of the exciting stuff where AI will unlock new opportunities and new spaces, like any technology. If you take the car, it's the same, right? Back in the day, a car was going to take you from point A to point B. Then it became an entertainment system: you listen to music, you have great quality speakers inside, now you can have large screens inside. It keeps getting more and more; we didn't stop after having just the basic car. So similarly, overall quality of life will increase. But we should not be blinded to, or ignore, the fact that it's leaving a lot of people behind, right? As leaders and people who are aware of these things, we should make sure everyone is coming along and we don't keep increasing the digital divide as we go. That's the whole other side.
Dave Erickson 42:40
Well, that kind of brings us to another aspect of AI, and one that you seem to be pretty passionate about, and that is: how do humans grow? How do they become better and more valuable using AI or AI skills? How can, let's just say, a developer use AI to upskill? What would your recommendations be for developers who are looking to upskill?
Bala Muthiah 43:11
Yeah, good question. You mentioned laziness, right? That's the first thing. And then one of the things it also comes down to is time, the notion of busyness. We think we don't have time, and we think we have time, and both are wrong, right? You don't have the time you think you have, and you really have time that you think you don't have. So understand that. For any person, any individual (I will come to developers), if you are exposed to technology, with the advent of AI you are going to get some time back. How you put that to proper use is in your hands, right? What you were able to pursue as a goal could now be done in a shorter time. So now you can have more goals to go after, build greater things, or go experience great things. Coming back to developers, it's the same. A Stanford study, I don't know the exact number but I think they surveyed around 120,000 developers, found the overall productivity gain still comes out somewhere around 10, maximum 15 percent. That's all. That looks small compared to the promises these big companies are making about replacing the whole workforce. But if you just look at 10 to 15 percent, that's big, right? That is very, very big. Say, ambitiously, you take 20 percent: 20 percent of a week is like one day. Imagine you only have to work four days, and one day you don't have to. What can you do with your life? What can you do with how you work? Everything will change: more opportunities, creating more value. And in terms of upskilling: every bit of time you get back, every minute you get back, put into upskilling yourself so you can achieve maximum efficiency. Even if it stays at 20 percent, that's good enough.
Then you have more time back and more passionate goals. The whole notion of work has evolved: back in the days before the five-day week, people worked six days a week. Now we are at five days and thinking about four; things are going to keep changing. And what do you want to do for society? What do you want to give others to make your whole surroundings better? You can't just have your house be the best house in the whole neighborhood when you step out and the road is broken and everything else is messy. You can't just have a beautiful house if your neighborhood is not good, so you need to go into the community and make it better. So think about using the time you gain to upskill and then create more value for the broader good, not just for yourself.
Dave Erickson 46:00
You know, the education system is still somewhat intact, although there are questions about what direction it's actually going in and what value it has. I have a young daughter who's thinking about going into engineering, and she's trying to figure out what her education path would be. Education is changing, and a lot of that is from AI; people are trying to learn from AI and from other platforms like YouTube and all kinds of other things. For a person who is going to college or getting out of college, what advice would you have as to what they should be focusing on if they wanted to take an engineering path?
Bala Muthiah 46:47
I would say, go for it. That is the most important job; we need builders for the world. So definitely pursue it. Also, as I mentioned a little earlier, the whole point of engineering is not the end solution; it's the whole process of building, that enjoyment. I don't think that excitement will come from anything else. The sense of satisfaction you get during the building process is bigger than the end product itself. So keep that in mind. Ten years ago it was something else; today it's AI; ten years from now it could be something else again. But the problem, the solution, and the process of solving it are going to stay. That is the engineering DNA. Think about how you keep that alive, how you nurture it, how you nourish it as you grow. So pursue the path, but do not lose sight because of tools that give you short-term gains. Think more long term, and enjoy the process of doing it. What was a problem a couple of years ago could be easily solved today. But think about the future: what do you want to build? How do you want to engineer the future? That's the thing to go and pursue; don't get bogged down by little things.
Botond Seres 48:06
So Bala, you were just mentioning the future, and regarding the future, I like to ask all of our guests something about it. So in your case, I would like to ask: in your opinion, what's the future of AI?
Bala Muthiah 48:23
In one word, I would say it's exciting, and there are a lot of things humanity is going to benefit from. You talked about the ability to fold and unfold proteins and what that can unlock, especially in healthcare. I'm very eager, curious, and excited about what it can really unlock to make our quality of life much better. That's number one. Number two, I would say AI will bridge the gap. Again, leaders are responsible for making sure that divide is closing, so that not many people are left out. AI will bring more people closer; more people will have access to systems they would not have had before. So people are going to be well informed and, I would say, organized in every part of their lives. That's going to be exciting. So to me, the one word is exciting. It's going to be great, because the quality of life, and the value we are going to create with those lives, will be much, much bigger.
Dave Erickson 49:25
So Bala, maybe you can take a minute and talk a little bit about what you're currently doing and what kinds of initiatives are exciting to you and that you're supporting.
Bala Muthiah 49:37
Yeah, totally. Outside of my day job, what I think a lot about is what we were just talking about, right: the digital divide, and how we avoid leaving people behind. Every time a new technology comes, many people get closer, but many people are left out. And AI is one of those things that will have not linear but exponential effects in these areas. So my passion, what I want to continue to invest in, is: how do we make sure society is not leaving behind a small group or a bigger group and only catering to one small subset? For one, I'm part of various groups and circles where we actively pursue founders outside of the normal stage, outside of Silicon Valley. I'm working with a couple of startups in Africa and one in India to get them exposed to everything we have. Look at how people who come to San Francisco, where I live, react when they see the autonomous cars: they get mind-blown, right? You don't see that every day, but when you live in that part of the world, it's part of your life; you see cars without drivers driving around. As much as you read about it on paper, nothing beats seeing five cars pass you and three of them have no drivers. So my goal, the value I'm seeking, is: how do I ensure more people are joining the workforce and not being left behind? It is accessible, but how do we make it more sustained? How do we help them? My volunteering and mentoring activities are all on that front, primarily. Number two, of course, is awareness. I'm in a lot of conversations with folks like you, trying to bring awareness to people. If someone listens to a podcast like this and walks away, they should go back with something they can actually do that's going to make their lives better. So, touching people's lives.
Making things better is very, very important for all of us. We don't want to live in a place where the room is good but the whole house is messy, so let's make the house better. Let's make the community better.
Dave Erickson 51:53
Is there a specific initiative that you are currently supporting that will help people who want to move forward and not be left behind?
Bala Muthiah 52:04
Totally, yeah. I have a mentor circle, a group of people mentoring founders, mainly founders from underrepresented communities or from countries that don't have this access. Think of it like a mentor circle you can reach out to; you can find me on my website, reach out, and I'll connect you with the right people. That's what we are doing: getting people connected to the right experts. We do not have all the answers, but we have networks and know who has networks, and that will pretty much unlock what people are looking for. Specifically, if you're an entrepreneur, or an aspiring student, like you said, going into college or getting out of college and into the workforce, reach out and we will help you and give you broader perspectives, so that you will be a better person and go achieve the great things you are set to achieve.
Dave Erickson 52:59
Yeah, we'll put your LinkedIn information in the description so people can contact you about that. I think that's a really important initiative. The AI "revolution," the robotics revolution, is going to really reshape societies. And societies globally are not equal, so it's going to have different impacts in different areas of the world, and I think it's important that people start communicating about how they're going to deal with these impacts. Does your group do that? How do you feel people are going to be able to talk about these subjects before it becomes a disaster?
Bala Muthiah 53:39
Yes. One of my aspirations, and there are probably listeners who would be interested, and I would love to collaborate, is to have some sort of think tank where we take on a broader responsibility. We don't have to do it all; we have people. I always believe there are people who are willing to give, and there are people who want to receive something, but the bridge between them is missing. We have enough people to give in this world, and we have enough people who could benefit from that, but the bridge is missing, so we're trying to be the bridge. So, creating a think tank to influence policies, right: to have contacts working in governments, and to help people in power draft policies that can have a significant impact and shape the future of a country or a population. A think tank that can go beyond just "hey, how do I code, how do I do this better." Again, coming back to human values: I'll be more happy to say "oh, I touched people's lives" than to say "oh, I'm an engineer and I build great things."
Dave Erickson 54:45
Bala, thank you so much for being on our podcast and discussing a human approach to AI development and upskilling.
Botond Seres 54:52
Well, we are at the end of the episode today, but before we go, we want you to think about this important question.
Dave Erickson 54:59
How are you going to integrate AI into your products?
Botond Seres 55:03
For our listeners, please subscribe and click Notifications to join us for our next ScreamingBox technology and business rundown podcast, and until then, improve yourself and your products with AI.
Dave Erickson 55:16
Thank you very much for taking this journey with us. Join us for our next exciting exploration of technology and business in the first week of every month. Please help us by subscribing, liking and following us on whichever platform you're listening to or watching us on. We hope you enjoyed this podcast, and please let us know any subjects or topics you would like us to discuss in our next podcast by leaving a message for us in the comment sections or sending us a Twitter DM till next month, please stay happy and healthy.