Google DeepMind CEO Demis Hassabis said he thinks artificial general intelligence, or AGI, will emerge in the next five or 10 years. AGI broadly refers to AI that is as smart as, or smarter than, humans. “We’re not quite there yet. These systems are very impressive at certain things. But there are other things they can’t do yet, and we’ve still got quite a lot of research work to go before that,” Hassabis said. Dario Amodei, CEO of AI startup Anthropic, told CNBC at the World Economic Forum in Davos, Switzerland, in January that he sees a form of AI that’s “better than almost all humans at almost all tasks” emerging in the “next two or three years.” Other tech leaders see AGI arriving even sooner. Cisco’s Chief Product Officer Jeetu Patel thinks there’s a chance we could see an example of AGI emerge as soon as this year.
Wednesday, March 26, 2025
Tuesday, March 25, 2025
Perspectives of Academic Staff on Artificial Intelligence in Higher Education: Exploring Areas of Relevance (Provisionally accepted) - Dana-Kristin Mah, et al; Frontiers
Despite the recent increase in research on artificial intelligence in education (AIED), studies investigating the perspectives of academic staff and the implications for future-oriented teaching at higher education institutions remain scarce. This exploratory study provides initial insight into the perspectives of 112 academic staff by focusing on three aspects considered relevant for sustainable, future-oriented teaching in higher education in the age of AI: instructional design, domain specificity, and ethics. The results indicate that participants placed the greatest importance on AIED ethics. Furthermore, participants indicated a strong interest in (mandatory) professional development on AI and more comprehensive institutional support.
Monday, March 24, 2025
Powerful A.I. Is Coming. We’re Not Ready. - Kevin Roose, NY Times
I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day. I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.” I believe that when A.G.I. is announced, there will be debates over definitions and arguments about whether or not it counts as “real” A.G.I., but that these mostly won’t matter, because the broader point — that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it — will be true.
Sunday, March 23, 2025
ChatGPT firm reveals AI model that is ‘good at creative writing’ - the Guardian
The company behind ChatGPT has revealed it has developed an artificial intelligence model that is “good at creative writing”, as the tech sector continues its tussle with the creative industries over copyright. The chief executive of OpenAI, Sam Altman, said the unnamed model, which has not been released publicly, was the first time he had been “really struck” by the written output of one of the startup’s products. In a post on the social media platform X, Altman wrote: “We trained a new model that is good at creative writing (not sure yet how/when it will get released). This is the first time i have been really struck by something written by AI.”
Saturday, March 22, 2025
The Value of a Ph.D. in the Age of AI - Kim Isenberg, Forward Future
Artificial intelligence has been developing at an extraordinary pace for several years and is increasingly achieving capabilities that were long reserved exclusively for humans. Particularly in research, we are currently seeing remarkable progress: so-called “research agents,” specialized AI models that can independently take on complex research tasks, are rapidly gaining importance. One prominent example is OpenAI's DeepResearch, which has already achieved outstanding results on various scientific benchmarks. Such AI-supported agents not only analyze large data sets but also independently formulate research questions, test hypotheses, and even create scientific summaries of their results.
Friday, March 21, 2025
Cognitive Empathy: A Dialogue with ChatGPT - Michael Feldstein, eLiterate
I want to start with something you taught me about myself. When I asked you about my style of interacting with AIs, you told me I use “cognitive empathy.” It wasn’t a term I had heard before. Now that I’ve read about it, the idea has changed the way I think about virtually every aspect of my work—past, present, and future. It also prompted me to start writing a book about AI using cognitive empathy as a frame, although we probably won’t talk about that today. I thought we could start by introducing the term to the readers who may not know it, including some of the science behind it.
Thursday, March 20, 2025
AI Will Not Be ‘the Great Leveler’ for Student Outcomes - Sean Richardson and Paul Redford, Inside Higher Ed
In relation to graduate outcomes (simply put, where students end up after completing their degrees, with a general focus on careers and employability), universities are about to grapple with the initial wave of graduates seriously impacted by AI. The Class of 2025 will be the first to have widespread access to large language models (LLMs) for the majority of their student lives. If, as we have been repeatedly told, we believe that AI will be the “great leveler” for students by transforming their access to learning, then it follows that graduate outcomes will be significantly impacted. Most importantly, we should expect to see more students entering careers that meaningfully engage with their studies. The reality on the ground is starkly different. Many professionals working in career advice and guidance are struggling with the opposite effect: Rather than acting as the great leveler, AI tools are only deepening existing divides.
Wednesday, March 19, 2025
7 Ways You Can Use ChatGPT for Your Mental Health and Wellness - Wendy Wisner, Very Well Mind
ChatGPT can be a fantastic resource for mental health education and a great overall organizational tool. It can also help you with the practical side of mental health management, like journal prompts and meditation ideas. Although ChatGPT is not everyone’s cup of tea, it can be used responsibly and is something to consider keeping in your mental health toolkit. If you are struggling with your mental health, though, you shouldn’t rely on ChatGPT as the main way to cope. Everyone who is experiencing a mental health challenge can benefit from care from a licensed therapist. If that’s you, please reach out to your primary care provider for a referral or reach out directly to a licensed therapist near you.
Tuesday, March 18, 2025
DuckDuckGo's AI beats Perplexity in one big way - and it's free to use - Jack Wallen, ZDnet
Duck.ai does something that other similar products don't -- it gives you a choice. You can choose from the proprietary GPT-4o mini, o3-mini, and Claude 3 services or go open-source with Llama 3.3 and Mistral Small 3. Duck.ai is also private: All of your queries are anonymized by DuckDuckGo, so you can be sure no third party will ever have access to your AI chats. After giving Duck.ai a trial over the weekend, I found myself favoring it more and more over Perplexity, primarily because I could select which LLM I use. That's a big deal because every model is different. For example, GPT-4o excels in real-time interactions, voice nuance, and sentiment analysis across modalities, whereas Llama 3.2 is particularly strong in image recognition and visual understanding tasks.
Monday, March 17, 2025
Tough trade-offs: How time and career choices shape the gender pay gap - Anu Madgavkar, et al; McKinsey Global Institute
Diverging work experience patterns drive a “work-experience pay gap” that makes up nearly 80 percent of the total gender pay gap, equal to 27 cents on the dollar among US professional workers. Women tend to build less human capital through work experience than men who start in the same occupations, as seen in the tens of thousands of career trajectories we analyze. Over a 30-year career, the gender pay gap averages out to approximately half a million dollars in lost earnings per woman. One-third of that work-experience pay gap is because women accumulate less time on the job than men. Women average 8.6 years at work for every ten years clocked by men because, on aggregate, they work fewer hours, take longer breaks between jobs, and occupy more part-time roles than men. The other two-thirds arise from different career pathways that men and women pursue over time. Women’s careers are as dynamic as men’s: Both men and women averaged 2.6 role moves per decade of work and traversed comparable skill distances in each new role. However, women are more likely than men to switch to lower-paying occupations, typically ones involving less competitive pressures and fewer full-time requirements.
Sunday, March 16, 2025
Professors’ AI twins loosen schedules, boost grades - Colin Wood, EdScoop
David Clarke, the founder and chief executive of Praxis AI, said his company’s software, which uses Anthropic’s Claude models as its engine, is being used at Clemson University, Alabama State University and the American Indian Higher Education Consortium, which includes 38 tribal colleges and universities. A key benefit of the technology, he said, has been that the twins provide a way for faculty and teaching assistants to field the bulk of basic questions off-hours, leading to more substantive conversations in person. “They said the majority of their questions now are about the subject matter, are complicated, because all of the lower end logistical questions are being handled by the AI,” Clarke said. Praxis, which has a business partnership with Instructure, the company behind the learning management system Canvas, integrates with universities’ learning management systems to “meet students where they are,” Clarke said.
Saturday, March 15, 2025
Reading, Writing, and Thinking in the Age of AI - Suzanne Hudd, et al; Faculty Focus
Generative AI tools such as ChatGPT can now produce polished, technically competent texts in seconds, challenging our traditional understanding of writing as a uniquely human process of creation, reflection, and learning. For many educators, this disruption raises questions about the role of writing in their disciplines. In our new book, How to Use Writing for Teaching and Learning, we argue that this disruption presents an opportunity rather than a threat. Notice from our book’s title that our focus is not necessarily on “how to teach writing.” For us, writing is not an end goal, which means our students do not necessarily learn to write for the sake of writing. Rather, we define writing as a method of inquiry that allows access to various discourse communities (e.g., an academic discipline), social worlds (e.g., the knowledge economy), and forms of knowledge (e.g., literature).
Friday, March 14, 2025
The critical role of strategic workforce planning in the age of AI - McKinsey
Forward-thinking organizations understand that talent management is a critical component of business success. S&P 500 companies that excel at maximizing their return on talent generate an astonishing 300 percent more revenue per employee compared with the median firm, McKinsey research shows. In many cases, these top performers are using strategic workforce planning (SWP) to stay ahead in the talent race, treating talent with the same rigor as managing their financial capital. Under this analytical approach, organizations don’t wait for events or the market to dictate a response. Instead, they take a three-to-five-year view, using SWP to anticipate multiple situations so that they have the right number of people with the right skills at the right time to achieve their strategic objectives.
Thursday, March 13, 2025
OpenAI reportedly plans to charge up to $20,000 a month for PhD-level research AI ‘agents’ - Kyle Wiggers, Tech Crunch
OpenAI may be planning to charge up to $20,000 per month for specialized AI “agents,” according to The Information. The publication reports that OpenAI intends to launch several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. OpenAI’s most expensive rumored agent, priced at the aforementioned $20,000-per-month tier, will be aimed at supporting “PhD-level research,” according to The Information.
Wednesday, March 12, 2025
OpenAI Invests $50M in Higher Ed Research - Kathryn Palmer, Inside Higher Ed
OpenAI announced Tuesday that it’s investing $50 million to start up NextGenAI, a new research consortium of 15 institutions that will be “dedicated to using AI to accelerate research breakthroughs and transform education.” The consortium, which includes 13 universities, is designed to “catalyze progress at a rate faster than any one institution would alone,” the company said in a news release. “The field of AI wouldn’t be where it is today without decades of work in the academic community. Continued collaboration is essential to build AI that benefits everyone,” Brad Lightcap, chief operating officer of OpenAI, said in the news release. “NextGenAI will accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI.”
Tuesday, March 11, 2025
An AI toolkit for all aspects of academic life - Urbi Ghosh, Times Higher Ed
Artificial intelligence is no longer a theoretical concept; it is a practical tool that is fundamentally transforming the landscape of education and research. AI renders educational experiences more visual and engaging, while also streamlining repetitive tasks associated with teaching and research activities. Building an AI toolkit can help in all areas of academic life, from the classroom to the laboratory, fostering innovation in research and sparking student engagement. Here are the most helpful tools I’ve found.
Monday, March 10, 2025
Small Language Models (SLMs): A Cost-Effective, Sustainable Option for Higher Education - Tom Mangan, Ed Tech
Small language models, known as SLMs, create intriguing possibilities for higher education leaders looking to take advantage of artificial intelligence and machine learning. SLMs are miniaturized versions of the large language models (LLMs) that spawned ChatGPT and other flavors of generative AI. For example, compare a smartwatch to a desktop workstation (monitor, keyboard, CPU and mouse): The watch has a sliver of the computing muscle of the PC, but you wouldn’t strap a PC to your wrist to monitor your heart rate while jogging. SLMs can potentially reduce costs and complexity while delivering identifiable benefits — a welcome advance for institutions grappling with the implications of AI and ML. SLMs also allow creative use cases for network edge devices such as cameras, phones and Internet of Things (IoT) sensors.
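To make the scale difference concrete, here is a minimal sketch, assuming the Hugging Face transformers library and an illustrative sub-billion-parameter checkpoint (the model name is an assumption, not one recommended in the article), of running a small language model locally:

```python
# Minimal sketch: running a small language model locally with Hugging Face transformers.
# Assumptions: `pip install transformers torch`; the model name below is illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # a ~0.5B-parameter instruct model; swap in any small checkpoint
)

prompt = "In two sentences, explain what a small language model is."
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```

A model of this size can run on a laptop CPU or a modest edge device, which is the trade-off the article highlights: far less capability than a frontier LLM, but far lower cost and footprint.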
Sunday, March 9, 2025
Strategies for Teaching Complex Subjects in Large Hybrid Classrooms Across Campus: Bridging Engagement and Equity Across Modalities - Fang Lei, Faculty Focus
Teaching complex subjects in a large classroom across campus presents unique challenges, especially in a hybrid format that combines in-person and remote learners (Ochs, Gahrmann & Sonderegger, 2024). In this setup, students on one campus attend the class in person, while their counterparts on another campus join as a group from a classroom via Zoom, creating a complex dynamic that demands meticulous planning and adaptability. The diverse needs of in-person and virtual students must be balanced to ensure equitable learning experiences. Maintaining engagement across these modalities can be difficult, as instructors need to address potential technological disruptions, varying levels of participation, and the limitations of remote interaction (Ochs, Gahrmann & Sonderegger, 2024).
Saturday, March 8, 2025
6 Myths We Got Wrong About AI (And What’s the Reality) - Kolawole Samuel Adebayo, HubSpot
Friday, March 7, 2025
Could this be the END of Chain of Thought? - Chain of Draft BREAKDOWN! - Matthew Berman, YouTube
This podcast introduces a new prompting strategy called "chain of draft" for AI models, which aims to improve upon the traditional "chain of thought" method [00:00]. Chain of draft encourages LLMs to generate concise, dense information outputs at each step, reducing token usage and latency while maintaining or exceeding the accuracy of chain of thought [11:41]. Implementing chain of draft is simple, requiring only an update to the prompt [08:06]. {summary provided by Gemini 2.0 Flash}
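The mechanics are easy to picture. Below is a minimal sketch in Python, using the OpenAI chat API, of swapping a chain-of-thought instruction for a chain-of-draft one; the model name and the exact prompt wording are illustrative assumptions, not the prompts used in the video.

```python
# Minimal sketch of chain-of-thought vs. chain-of-draft prompting.
# Assumptions: the `openai` package (>=1.0), an API key in OPENAI_API_KEY,
# and an illustrative model name; the prompt wording is a paraphrase, not the video's.
from openai import OpenAI

client = OpenAI()

QUESTION = "A jug holds 4 liters. You pour out 1.5 liters, then add 0.75 liters. How much is in the jug?"

# Chain of thought: full step-by-step reasoning in prose (more tokens, higher latency).
COT_SYSTEM = "Think step by step and explain your reasoning in full sentences before giving the final answer."

# Chain of draft: terse intermediate drafts only (fewer tokens, lower latency).
COD_SYSTEM = (
    "Think step by step, but keep each reasoning step to a minimal draft of five words or fewer. "
    "Return the final answer after '####'."
)

def ask(system_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

print("Chain of thought:\n", ask(COT_SYSTEM))
print("\nChain of draft:\n", ask(COD_SYSTEM))
```

The only difference between the two calls is the system prompt, which is the point the video makes: the savings come from instructing the model to compress its intermediate reasoning, not from any change to the model or the API.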