Our union recently had our biannual ethics training course. These hours are part of our PE licensing requirements.
During the class, there was a poll regarding the ethical use of AI in engineering. ~80% of the membership considered ANY use of AI in the development of plans and specs to be unethical.
The licensing board had other ideas, comparing it to the advent of CADD.
Dumb.
I was meeting with one of our engineering professors last semester, and when he found out my educational background was in ML/AI, he started talking about the dilemma they feel they're in. On one hand, they understand what a great tool AI can be, and they know that between our university's AI policies and just practical reality, they can't stop students from using it. On the other hand, there are many things they're adamant students need to learn to do truly on their own: foundational concepts that nothing should help them with, things that if students can't pass the tests and assignments unaided, they should wash out of the program. There's just no great system for enforcing that balancing act, nor is it obvious to them where the line in that balance should be.
According to him, as a collective group, the professors across our various engineering departments are not optimistic at the moment, and are struggling with how to proceed in this new era of education.
The last three conferences my department has had any hand in putting on have dealt with a lot of this. We don't run that programming ourselves, but we frequently work in conjunction with a faculty-success department whose whole mission is coming up with ways to navigate this new era. I genuinely applaud the work I see them doing.
I also think they're all fooling themselves. I've sat through their sessions, listened to their speeches, and engaged with their presentations. The confidence they have in 1) the methods and safeguards they suggest, and 2) the outcomes of those methods should they even prove viable, is, frankly, unfounded at best. Me, I think nobody has any idea what's coming: not just in the job market, but, more relevant to this discussion, in the educational outcomes for engineering (and other) students, and, maybe most soberingly, in the real-world projects those students will go on to work on. I think the university think-tank has an overly optimistic view of the human condition, which is where I see flaws in a lot of their suggestions, and an equally optimistic view of the eventual, overall effects of AI on true human learning.
At the risk of sounding full of myself, I'm also pretty sure I understand AI better than at least 90% of the people tasked with navigating and implementing it into policy. And I'm 100% sure that I have a more realistic understanding of what students are going to do with it, given the chance.
These are truly interesting times.