
How are you guys dealing with non-technical managers using AI?

Cannot wait for you code monkeys to be replaced by Grok.
Believe it or not, me too. It would be awesome to spit out some awesome software with a few prompts. Unfortunately it seems the Chinese are 5-10 years away while the Americans are about 20 years away from that. I'll be retired by then
 
Believe it or not, me too. It would be awesome to spit out some awesome software with a few prompts. Unfortunately it seems the Chinese are 5-10 years away while the Americans are about 20 years away from that. I'll be retired by then
Don't worry man, we will all get fucked at some point, whether it's now or later. On the upside, you can still play with the AI overlord during retirement.
 
@masterwrong show this to your boss


Seems like you are being headhunted already :) We should still have a position open, full WFH
 
@masterwrong show this to your boss


Seems like you are being headhunted already :) We should still have a position open, full WFH
Cool, popped you a DM.

Seems like the CEO burned thousands of rands on Sonnet to create that abomination that used a Postgres DB connected directly to the React frontend and a separate Redis DB for the API? I had to explain why that is insane, along with explaining the concept of memory management (which we don't do manually since we use TypeScript and Python) and why it was a waste of time to even think about unless we want to migrate our codebase to another language.
 
Cool, popped you a DM.

Seems like the CEO burned thousands of rands on Sonnet to create that abomination that used a Postgres DB connected directly to the React frontend and a separate Redis DB for the API? I had to explain why that is insane, along with explaining the concept of memory management (which we don't do manually since we use TypeScript and Python) and why it was a waste of time to even think about unless we want to migrate our codebase to another language.
Did he concede? What was the outcome of it all?
 
I deal with clients that think they can just use ChatGPT or the like to write custom SQL-based reports, then don't understand why things don't run, or don't work at all.
The best thing to do is sit down and scope the actual requirement. It's cool and all that he tried to write something (and it might even work in a sandbox), but it's best to get a proper scope and do a proper roll-out in accordance with the rest of your stack.
 
Did he concede? What was the outcome of it all?
Ok, so some feedback on this ongoing saga.

After yesterday's 3-hour marathon meeting, where I ripped every single line of code written by the AI to shreds (the stupid AI seems to have lost context in the middle of a function; it looks like someone coded it by just clicking the middle option on next-word auto-complete), he used the AI-generated transcript of our meeting (conveniently ignoring my recommendations to learn the basics, break the project up into tiny building blocks, and get the AI to generate each section) to prompt the AI to fix the project.

Yes, he ignored my feedback entirely and used it as a prompt for the AI to fix the coding errors the AI created. The results? Apparently it ran unit tests. He doesn't have a venv, uvicorn, Python, TypeScript, nvm or npm installed on his PC, so I'm not sure how it did that. And now he wants to deploy it, because he wants to use it as a scope for the project we are going to build next. He can't even run the abomination locally, but he wants us to create a Dockerfile and run an instance of it on our servers.

At this point I'm not even sure why I should give anything but the bare minimum feedback. I spent 40 minutes yesterday explaining that Python and TypeScript don't give you manual memory management, so outside of general code optimisation there is nothing to tune, all because the AI told him we can optimise the data caching.
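For what it's worth, "optimising the data caching" in our stack just means code-level stuff like memoising expensive calls; the garbage collector handles the memory. A rough, hypothetical Python sketch (not our actual codebase):

from functools import lru_cache

# Stand-in for a slow DB/API call (hypothetical example).
def expensive_lookup(product_id: int) -> float:
    return 100.0 + product_id

# "Optimising the data caching": memoise the expensive call.
# The memory itself is managed by the interpreter's garbage collector, not by us.
@lru_cache(maxsize=256)
def vat_inclusive_price(product_id: int) -> float:
    return round(expensive_lookup(product_id) * 1.15, 2)

print(vat_inclusive_price(7))  # computed once...
print(vat_inclusive_price(7))  # ...then served straight from the cache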

I deal with clients that think they can just use ChatGPT or the like to write custom SQL-based reports, then don't understand why things don't run, or don't work at all.
The best thing to do is sit down and scope the actual requirement. It's cool and all that he tried to write something (and it might even work in a sandbox), but it's best to get a proper scope and do a proper roll-out in accordance with the rest of your stack.

He doesn't want us to know what it does, because then we might build it and get it working faster than his precious AI can.
 
Ok, so some feedback on this ongoing saga.

After yesterday's 3-hour marathon meeting, where I ripped every single line of code written by the AI to shreds (the stupid AI seems to have lost context in the middle of a function; it looks like someone coded it by just clicking the middle option on next-word auto-complete), he used the AI-generated transcript of our meeting (conveniently ignoring my recommendations to learn the basics, break the project up into tiny building blocks, and get the AI to generate each section) to prompt the AI to fix the project.

Yes, he ignored my feedback entirely and used it as a prompt for the AI to fix the coding errors the AI created. The results? Apparently it ran unit tests. He doesn't have a venv, uvicorn, Python, TypeScript, nvm or npm installed on his PC, so I'm not sure how it did that. And now he wants to deploy it, because he wants to use it as a scope for the project we are going to build next. He can't even run the abomination locally, but he wants us to create a Dockerfile and run an instance of it on our servers.

At this point I'm not even sure why I should give anything but the bare minimum feedback. I spent 40 minutes yesterday explaining that Python and TypeScript don't give you manual memory management, so outside of general code optimisation there is nothing to tune, all because the AI told him we can optimise the data caching.



He doesn't want us to know what it does, because then we might build it and get it working faster than his precious AI can.
Time to leave...

If he doubles down it means he doesn't learn and only believes his own opinions.

If this solution doesn't work, who do you think will get the blame?
 
Time to leave...

If he doubles down it means he doesn't learn and only believes his own opinions.

If this solution doesn't work, who do you think will get the blame?
Yip, I'm actively searching now. Not the passive "update your CV and LinkedIn and reply to recruiters who send you messages" kind of searching.
 
I spent 40 minutes yesterday explaining that Python and TypeScript don't give you manual memory management
Something I wondered about just this morning when I heard the VAT rate is only going up by 0.5%: I'm pretty sure somewhere out there in the wild some prompt engineer has the VAT rate stored as an int...
 
Something I wondered about just this morning when I heard the VAT rate is only going up by 0.5%: I'm pretty sure somewhere out there in the wild some prompt engineer has the VAT rate stored as an int...
We had a guy make this mistake when VAT was increased from 14% to 15% and he wrote a script to update client databases... suddenly all their totals were rounded to the nearest rand because the 15 was passed as an integer instead of a numeric/float value... it took us 3 months to realise and undo that fuckup.
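For anyone who hasn't seen it bite, the general shape of the mistake looks something like this in Python (a made-up illustration, not the actual script):

from decimal import Decimal, ROUND_HALF_UP

price = Decimal("107.39")

# The "rate as an int" trap: integer division throws the fraction away entirely.
vat_rate_int = 15
no_vat_at_all = price * (vat_rate_int // 100)   # 15 // 100 == 0, so VAT is 0.00

# The subtler version: the total lands in an integer somewhere along the way,
# so 16.11 is silently truncated to 16 and everything ends up in whole rands.
whole_rands_only = int(price * vat_rate_int / 100)  # 16, the cents are gone

# Keep the rate (and the money) as Decimal and round explicitly at the end.
VAT_RATE = Decimal("0.15")
vat = (price * VAT_RATE).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(vat)  # 16.11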
 
Something I wondered about just this morning when I heard the VAT rate is only going up by 0.5%: I'm pretty sure somewhere out there in the wild some prompt engineer has the VAT rate stored as an int...
Hahahahaha just thinking about how his entire prompt system is going to explode now makes me happy.
 
Hahahahaha just thinking about how his entire prompt system is going to explode now makes me happy.
It will blow up in the most fantastic way when some random edge case that the AI didn't think about happens, and the prompter will just have a surprised Pikachu face because it was working the whole time! (And of course the prompt engineer will have zero ability to debug and fix the issue himself.)
 
It will blow up in the most fantastic way when some random edge case that the AI didn't think about happens, and the prompter will just have a surprised Pikachu face because it was working the whole time! (And of course the prompt engineer will have zero ability to debug and fix the issue himself.)
I spent 2 years learning to code and the fundamentals of software engineering, then did a year-long internship to learn how to develop production-level code, followed by 3 years of professional experience writing and debugging code. Getting undermined by someone with Dunning-Kruger and AI makes my blood boil!

Imagine having to explain why the AI was lying when it claimed you can do low-level control tasks with Python.
 
Just a bit of an update. After spending 3 weeks not being able to produce a simple product using AI, my boss has decided that, because we are developers, we would be able to produce products quicker with AI (we already use AI in our workflows).

I called it when he started "seeing the potential of AI" in January. Basically he wants us to produce new features in a single day rather than at the already extreme pace of a couple of weeks.

In other news, I did a few TestGorillas over the weekend. Hopefully that goes well.
 
I may have missed it if it got mentioned earlier, but I recently found out on the Reddit webdev section that there is actually a term for people with no programming background/experience who use AI for programming: it's called "vibe coding", coined by an OpenAI exec.
 
I may have missed it if it got mentioned earlier, but I recently found out on the Reddit webdev section that there is actually a term for people with no programming background/experience who use AI for programming: it's called "vibe coding", coined by an OpenAI exec.
Well, at least I know that my career isn't in trouble. I'll be fixing and scaling all these vibe-coded monstrosities well into my 70s.
 
I've been reviewing the code for the past 2 hours and found 3 main.py files and 2 .env files.

The index.js file and the main main.py file define two completely separate DBs.

Yeah, does anyone know of anyone looking for a full-stack developer with 4 years of experience? I can dev in TypeScript, Python, Java, JavaScript (Node.js and Express as well), Angular, React, HTML and CSS.
Based in Cape Town.
 
I may have missed it if it got mentioned earlier, but I recently found out on the Reddit webdev section that there is actually a term for people with no programming background/experience who use AI for programming: it's called "vibe coding", coined by an OpenAI exec.
Oh, so that's what it's called... I work with a guy claiming he can easily be a dev with how "AMAZING" AI is... He works in sales 🙃
 
Oh, so that's what it's called... I work with a guy claiming he can easily be a dev with how "AMAZING" AI is... He works in sales 🙃
Man, those people are going to make me rich in a year's time. "Why is my system broken, help!!!!"
 
I've spent the last 5/6 years in a startup focusing on chatbot development. Before that, a decade and some change doing enterprise Java development. The advances seen in the LLM/AI space over the last 18-24 months are honestly phenomenal!

I myself am continually shocked when I dust off hello-worlds from 2023 for specific libraries and then note how far those platforms have come since then.

I recall a specific discussion with a client around 2 years back where we spoke about introducing GPT into the existing bot ecosystem, but we ended up agreeing that the costs were prohibitive. That's not the case today though, as prices have come down drastically.

Sure, building a costing model for a solution that utilizes a cloud-hosted LLM (OpenAI, for example) and having to thumb-suck token volumes to ensure you don't undercharge is still super frustrating. But at least the model costs are getting cheaper.
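The thumb-suck usually ends up as a back-of-napkin sum like this (every number below is a made-up placeholder, not real provider pricing):

# Rough monthly cost estimate for a cloud-hosted LLM.
# All figures are placeholder assumptions; swap in your own measurements
# and the provider's actual price sheet.
conversations_per_month = 20_000   # guess
avg_input_tokens = 1_200           # prompt + context per conversation
avg_output_tokens = 400            # model reply per conversation

price_per_1k_input = 0.0025        # hypothetical $ per 1K input tokens
price_per_1k_output = 0.0100       # hypothetical $ per 1K output tokens

monthly_cost = conversations_per_month * (
    avg_input_tokens / 1000 * price_per_1k_input
    + avg_output_tokens / 1000 * price_per_1k_output
)
print(f"~${monthly_cost:,.2f} per month")  # ~$140.00 with these guesses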

I love software development and I loathe it when people think you can simply generate an entire point-of-sale or whatever system overnight using an LLM cos they saw a thingie on tikkop about it! Software development done right is an art form. No, I'm not "gatekeeping" the field; I'm stating that there's a lot more to developing software than just writing the code; or in this case, having the LLM write it for you!

Things like source & version control, configuration management, modularization and dependency management are not things that the vibe-coder knows about. And let's not even start talking about building and deploying said code!

The average non-technical prompt hacker doesn't understand all of these little nuances and therein lies the rub!

It sounds to me like you're relatively new to the development world and you're trying to argue the case from a technical angle.
You're going to lose that battle 10 times out of 10.

Your boss doesn't actually care about what you're telling him about unit testing, conformance to the current tech stack, etc. All he's hearing is "excuses". From where he's sitting, it seems that "innovation" isn't happening and things are taking too long.
You need to perhaps take a step back and look at your team's development processes a bit more objectively.
Are there a lot of processes for the sake of process? What's your turn-around time like on releasing features, bug fixes, etc.?

You need to find a way to work with him instead of against him. You need to show him that you believe in his vision of using AI and find ways to integrate it into your workflow in a manner that is
1) relevant to what you're doing
2) sustainable for the team to keep using
3) valuable to the organization

We must understand that we're probably close to the top of the Generative AI hype cycle; everyone and his dog is doing something with GenAI. Your boss is likely feeling pressure from his peer group and market competitors to get onto the train.

So perhaps what you should do is have a chat and explain to him that in order for the generated code to be usable, it needs to be compatible with the current platform. Therefore, you need him to explain the feature to you, and then you'll develop a prompt that takes your current environment, rules, etc. into consideration and creates a usable artifact.

LLMs, too, are a case of garbage in, garbage out. Craft yourself a nice handy prompt that specifies things like versions of libraries/dependencies/frameworks, variable naming styles, logging standards, modularization and configuration management.
Then use that prompt to generate his feature, or better yet, arm him with that base prompt for his future endeavours.
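Something along these lines as a starting point (the stack details below are made-up examples, substitute your own):

You are generating code for an existing production system. Constraints:
- Backend: Python 3.11 with FastAPI and SQLAlchemy against the existing Postgres database; do not introduce new datastores.
- Frontend: TypeScript + React; follow the existing component and folder structure.
- Naming: snake_case in Python, camelCase in TypeScript; no single-letter variables.
- Logging goes through the existing logger module; no print statements.
- Configuration comes from environment variables only; never hard-code secrets or connection strings.
- Output one module at a time, with unit tests, and state any assumptions you made.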

He's not going to stop poking around simply cos you told him the code is kak. So, instead of letting him wander, steer him in the direction that works for you.

Step 1 should be to make sure he's using Claude for code generation :)

Anyway, hope this great wall of text helps you reframe your mind and approach towards this.
 
Thank you for taking the time to write a wall of text on the matter.

I have tried to show him how the dev team uses AI in our workflow to speed up development, but it is never quick enough for him. It seems like he wants major features shipped in an afternoon with only a vague description of said feature to build on. He also retroactively wants to add features to current builds that don't fit into the existing design patterns.

If I can explain his approach to development as a metaphor: he asks for a chocolate cake at the beginning, then when the cake is in the oven he asks us to change it into a car. Once the car is in the wind tunnel for aerodynamic testing, he changes his mind and demands that he actually wants a 4-bedroom townhouse. Once the townhouse is complete and he is inspecting it, he walks up to a wall, takes a bite, and asks why it isn't made of chocolate and can't drive.

On top of that he demands daily 2-hour scrum meetings, and further 2-4 hour meetings the closer we get to a deadline. It makes shipping features very frustrating as a dev and usually ends up with 6-8 hours of unpaid overtime a day to meet deadlines. In his head, AI is a way to keep all of the completely inefficient processes, the constantly changing scopes and the complete revisions, and still ship features in an afternoon.

The dev team (he recently laid off 60% of us) has developed a comprehensive set of prompts in our documentation, and we took the time to explain them all to him, why to use them, and the flaws in the way he was designing his feature with AI (namely the API being connected to a DB that is completely separate from the DB used by the frontend), and all he did was take the AI-generated minutes of that meeting and use them as a prompt to fix the feature. I genuinely thought he was interested in learning a little bit of what goes into software dev, but he seems to want to "no-code" everything without knowing a single thing about how data moves through the system or what an API is. It is getting incredibly frustrating: the more we work with him, trying to help him live out this power trip he is on, the more he goes in the complete opposite direction, and that causes more bugs, more broken code and more unpaid overtime.

It is honestly burning me out. I love software development, but not when I need to work 100 hours a week (16-hour days during the week and 10-hour days every weekend). I haven't done a git push for a single side project in months, and this is all because my boss seems to think that with AI, software dev is suddenly something you can do in an afternoon at most, if you are going slowly.
 
Jeepers, that additional insight actually changes the picture.

I reckon if that's the trajectory the business is heading in, and you don't see a way of getting him to think rationally (like a software dev), then it's probably time to pop smoke and get the heck out of Dodge!
 
How is your experience with Copilot? I've mostly only used the app on my phone for FAFO 🤣
I am one of the few folk with a full licence. It's not massively amazing, but since it's built into the Microsoft suite there aren't the same safety issues of, for example, letting any random AI into meetings. I use it for meeting notes, action points, etc., and have also built basic chatbots that utilise other Microsoft products, which can be useful.

For generative AI though, I get better answers from other software.
 
I am one of the few folk with a full licence. It's not massively amazing, but since it's built into the Microsoft suite there aren't the same safety issues of, for example, letting any random AI into meetings. I use it for meeting notes, action points, etc., and have also built basic chatbots that utilise other Microsoft products, which can be useful.

For generative AI though, I get better answers from other software.
Yea, that is also one of my concerns. Same reason DeepSeek is fully blocked at work.
 