69. Phanish Puranam - Why organisations will need human-centric AI to survive
In this episode, I am joined by Professor Phanish Puranam from INSEAD, one of the world's leading thinkers on strategy and organizational design, and this year's recipient of the prestigious Oskar Morgenstern Medal from the University of Vienna.

Our conversation explores the deep and often hidden ways in which technology, and Artificial Intelligence in particular, reshapes organizations: how we work, how we collaborate, and how power is distributed inside them.

In the first part of the interview, we dive into one of the most fundamental questions in organizational science: centralization versus decentralization. How do technologies, especially communication technologies, shift the balance between empowering workers with autonomy and giving managers unprecedented tools for monitoring and control?

In the second half of our discussion, we turn to generative AI and its impact on how employees' skills and expertise are developed within organizations. While GenAI promises efficiency and new forms of collaboration, it also carries the risk of "cognitive offloading": outsourcing thinking to machines in ways that could erode human competence over time. We consider the tension between treating AI as a tool that enhances human capability, like the abacus, and one that risks hollowing out expertise, like the calculator. And we confront the very important question of what organizations risk if they replace too many workers with AI agents, resulting in a future where every competitor uses the same AI. In such a world, what is left to distinguish one company from another?

Phanish makes a compelling case that companies must continue to invest in human-centric organisations, not only because people bring autonomy, competence, and connection, but also because these qualities will be the true sources of competitive advantage in an AI-saturated marketplace.
--------
1:10:50
68. Markus Tretzmüller - Cortecs - European LLM infrastructure independence
Since the beginning of the year, there has been a strong political desire in Europe to become independent from the USA. This concerns not only the current military dependency, but also the dependency on US tech companies. Particularly interesting for the AAIP is, of course, Europe's strong dependence on American and Chinese AI models and on the compute infrastructure needed to use these models.

Today on the podcast I talk to Markus Tretzmüller, co-founder of Cortecs, an Austrian company that has set itself the goal of using a sky computing approach to develop a routing solution that enables European companies to use local cloud providers for AI applications. This makes it possible to build AI solutions that operate within the European legal space without giving up the advantages of hyperscalers, such as cost efficiency and resilience.

In the interview, Markus explains why relying on European subsidiaries of US companies is not enough to guarantee independence and data security, and what advantages a routing solution like Cortecs can offer.

Enjoy listening!

## References
- Cortecs: https://cortecs.ai/ - Building Your Sovereign AI Future
- Sky computing: https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s02-stoica.pdf
- RouteLLM: https://arxiv.org/abs/2406.18665
- FrugalGPT: https://arxiv.org/abs/2305.05176
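The core idea of such a routing layer can be illustrated with a minimal sketch: pick the cheapest available provider that satisfies a jurisdiction constraint. The provider names, prices, and fields below are invented for illustration and do not describe Cortecs' actual system.

```python
# Hypothetical provider catalogue; all names and numbers are made up.
PROVIDERS = [
    {"name": "eu-cloud-a", "region": "EU", "price_per_1k_tokens": 0.40, "up": True},
    {"name": "eu-cloud-b", "region": "EU", "price_per_1k_tokens": 0.55, "up": True},
    {"name": "us-hyperscaler", "region": "US", "price_per_1k_tokens": 0.30, "up": True},
]

def route(providers, required_region="EU"):
    """Return the cheapest available provider inside the required jurisdiction."""
    eligible = [p for p in providers if p["region"] == required_region and p["up"]]
    if not eligible:
        raise RuntimeError("no compliant provider available")
    return min(eligible, key=lambda p: p["price_per_1k_tokens"])

print(route(PROVIDERS)["name"])  # → eu-cloud-a
```

Note how the US provider is cheapest overall but is never selected; if one EU provider goes down, the router fails over to the next, which is how cost efficiency and resilience can coexist with a jurisdiction constraint.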
--------
42:39
67. Mathias Neumayer and Dima Rubanov - Lora, a child-friendly AI
## Summary
Large Language Models have many strengths, and the frontier of what is possible and what they can be used for is pushed back on a daily basis. One area in which current LLMs need to improve is how they communicate with children. Today's guests, Mathias Neumayer and Dima Rubanov, are here to do exactly that with their newest product, Lora, a child-friendly AI.

Through their existing product, Oscar Stories, they identified issues with age-appropriate language and gender bias in current LLMs. With Lora, they are building their own child-friendly solution by fine-tuning state-of-the-art LLMs with expert-curated data that ensures Lora generates language appropriate for children of a specific age.

On the show, they describe how they are building Lora and what they plan to do with it.

### References
- Oscar Stories: https://oscarstories.com/
- GenBit Score: https://www.microsoft.com/en-us/research/wp-content/uploads/2021/10/MSJAR_Genbit_Final_Version-616fd3a073758.pdf
- Counterfactual Reasoning for Bias Evaluation: https://arxiv.org/abs/2302.08204
--------
53:10
66. Taylor Peer - Beat Shaper - A music producer's AI copilot
Today on the show I have the pleasure of talking to returning guest Taylor Peer, one of the co-founders of the startup behind Beat Shaper.

Taylor explains how they follow a bottom-up approach to creating electronic music, giving producers fine-grained control to create individual music instruments and beat patterns. For this, Beat Shaper combines Variational Autoencoders (VAEs) and Transformers: the VAE creates high-dimensional embeddings that represent the user's preferences, and these are used to guide the autoregressive generation process of the Transformer. The token sequence generated by the Transformer is a custom-developed symbolic music notation that can be decoded into individual instruments.

We discuss the system architecture and training process in detail. Taylor explains in depth how they built such a system, and how they created their own synthetic training dataset containing music in symbolic notation, which enables the fine-grained control over the generated music.

I hope you like this episode and find it useful.

### References
- beatshaper.ai - Beat Shaper, an AI copilot for music producers
- https://openai.com/index/musenet/ - OpenAI MuseNet
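The conditioning pattern described above, a latent embedding steering an autoregressive decoder, can be sketched in a few lines. This toy uses fixed random matrices instead of trained networks; every name, dimension, and vocabulary size is a made-up placeholder, not Beat Shaper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "VAE encoder": maps a preference vector (e.g. sliders a producer sets)
# to a latent embedding z. A real VAE would be a trained network returning
# mean and variance; here it is a fixed random projection.
ENC_W = rng.normal(size=(4, 8))           # 4 preference dims -> 8-dim latent

def encode_preferences(prefs):
    """Encode user preferences into a latent conditioning vector z."""
    return np.tanh(prefs @ ENC_W)

# Toy autoregressive "Transformer": each step's token logits depend on the
# latent z and the previously generated token. A real model would attend
# over the entire token history.
VOCAB = 16                                # symbolic-music token vocabulary
COND_W = rng.normal(size=(8, VOCAB))      # latent -> logits
STEP_W = rng.normal(size=(VOCAB, VOCAB))  # previous token -> logits

def generate(z, length=8):
    """Greedy autoregressive decoding conditioned on z."""
    tokens = [0]                          # start token
    for _ in range(length):
        logits = z @ COND_W + STEP_W[tokens[-1]]
        tokens.append(int(np.argmax(logits)))
    return tokens[1:]

z = encode_preferences(np.array([0.2, -0.5, 0.9, 0.1]))
print(generate(z))
```

The key property the sketch shows is that the same decoder produces different token sequences for different latent vectors, which is what gives the producer fine-grained control: change the preferences, and the generated symbolic music changes.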
--------
52:20
65. Daniel Kondor - CSH - The long term impact of AI on society
Guest in this episode is the computational social scientist Daniel Kondor, postdoc at the Complexity Science Hub in Vienna.

Daniel talks about research methods that make it possible to study the impact of various factors, like technological development, on societies, and in particular their rise or fall over long periods of time. He explains how modern tools from computational social science, like agent-based modelling, can be used to study past and future social groups. We talk about his most recent publication, which takes a complex-systems perspective on the risks AI poses for society and provides suggestions on how to manage such risks through public discourse and the involvement of affected competency groups.

## References
- Waring TM, Wood ZT, Szathmáry E. 2023. Characteristic processes of human evolution caused the Anthropocene and may obstruct its global solutions. Phil. Trans. R. Soc. B 379: 20220259. https://doi.org/10.1098/rstb.2022.0259
- Kondor D, Hafez V, Shankar S, Wazir R, Karimi F. 2024. Complex systems perspective in assessing risks in artificial intelligence. Phil. Trans. R. Soc. A 382: 20240109. https://doi.org/10.1098/rsta.2024.0109
- Seshat Global History Databank: https://seshat-db.com/
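To give a flavour of agent-based modelling, here is a deliberately tiny sketch of one of its classic ingredients, social learning versus independent innovation. The model, its parameters, and its dynamics are invented for illustration and are not taken from Daniel's research.

```python
import random

random.seed(1)

# Minimal agent-based model: each agent either has adopted a technology or
# not. At every step a random agent either innovates independently (with a
# small probability) or copies the state of a random peer (social learning).
N_AGENTS = 50
STEPS = 200
P_INNOVATE = 0.01

agents = [False] * N_AGENTS          # nobody has the technology at t = 0

for _ in range(STEPS):
    i = random.randrange(N_AGENTS)
    if random.random() < P_INNOVATE:
        agents[i] = True             # independent invention
    else:
        j = random.randrange(N_AGENTS)
        agents[i] = agents[j]        # copy a random peer (can also lose it)

print(sum(agents), "of", N_AGENTS, "agents adopted the technology")
```

Even this toy exhibits the kind of path dependence such models are used to study: adoption can spread, stall, or die out depending on when the first inventions happen, which is why simulation rather than closed-form analysis is the standard tool.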
--------
1:04:50
About the Austrian Artificial Intelligence Podcast
Guest interviews discussing the possibilities and potential of AI in Austria.
Questions or suggestions? Write to [email protected]