Entry #1: Beginning a regular newsletter starting with this one on AI and education
Because I should stop finding excuses not to write down my thoughts on the things I read.
Shall we start with a topic on AI?
Here’s a recent topic that made me pay attention. But before I jump into it, I consider myself a voracious reader of news on journalism, media, marketing, entrepreneurship, sports, writing, books, technology, innovations, science fiction, history, sociology, psychology, and business, to name a few. Then, I snack on short videos on Reels, TikTok, and Instagram (yes, it now has a lot of video content) to relax.
For this #1 edition, I will dive into AI, but I might also break the monotony by inserting a “must-read” — or at least something that kept me reading for longer than three minutes.
Okay, let’s begin.
AI has been the hottest topic since ChatGPT launched worldwide. Generative AI has threatened a lot of work — copywriting, journalism, graphic design, and any creative work that requires only a specific “prompt.” Type a few descriptive words and, voilà, it generates an output: an article, a poem, a graphic design, a nice caption for an ad. Powered by massive computing, generative AI feels like a search engine on steroids. Once you finish typing a prompt, say, “Show me a black and white photo of a man standing in the middle of nowhere and contemplating,” an image is generated.
Guess what? The image below is the result of that exact prompt, generated by Substack’s generative AI.
That’s the magic of generative AI. Despite my poor graphic design skills, I managed to produce one in a few seconds, merely by telling this computer to create one. Loosely speaking, those words told the AI to “search” what it learned from millions of photos and their metadata (the data that comes with your photos) for the following words. (Strictly speaking, it generates new pixels from patterns it learned during training rather than retrieving existing photos, but “searching millions of images” is a handy mental model.)
- man
- black and white
- photo
- man contemplating
- middle
- nowhere
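Just for fun, here’s a tiny Python sketch of that keyword idea. To be clear, this is only an illustration of the list above, not how image generators actually work, and the `STOPWORDS` set and `prompt_keywords` function are names I made up for this toy example.

```python
# Toy illustration: pull the "content words" out of a prompt by
# dropping common filler words, the way you might imagine the AI
# matching a prompt against photo metadata. Real generators learn
# statistical patterns instead of doing keyword lookups.
STOPWORDS = {"show", "me", "a", "an", "of", "the", "in", "and"}

def prompt_keywords(prompt: str) -> list[str]:
    """Return the content words of a prompt, lowercased, in order."""
    words = prompt.lower().replace(",", "").split()
    return [w for w in words if w not in STOPWORDS]

keywords = prompt_keywords(
    "Show me a black and white photo of a man "
    "standing in the middle of nowhere and contemplating"
)
print(keywords)
# → ['black', 'white', 'photo', 'man', 'standing',
#    'middle', 'nowhere', 'contemplating']
```

Notice how close that output is to the keyword list above — which is exactly why the “search on steroids” mental model feels so natural, even though the real machinery underneath is very different.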
A regular Google search would produce a set of links to stock images, most likely available for a fee. Generative AI cuts to the chase and gives you what you want. That’s mind-blowing. A Google search gives you links to more information — not too impressive these days, but it is still the standard way of doing online research.
The applications of generative AI are endless but scary. It also threatens things I do now, like writing this newsletter. Here’s another example, this time using a prompt to Copilot, Microsoft’s generative AI assistant. I asked it to generate a 150-word explanation of how generative AI works in the language of a 10-year-old.
The result:
Sure, imagine you’re playing with a big box of Lego blocks. Now, think of generative AI as an invisible friend who also loves Legos. You show your friend a cool spaceship you built. Your friend looks at it, learns from it, and then builds another spaceship on their own. It might look similar to yours, but it’s not the same. That’s because your friend is using their own “imagination” to create something new. In the real world, generative AI is like that friend. It looks at lots of information (like pictures, words, or music), learns from it, and then creates something new. It’s not copying; it’s generating new stuff based on what it learned. Just like how you can build different things with the same Legos, generative AI can create different outputs from the same information. Cool, right? 😊
Not bad, right? Will this impress a teacher who asked you to submit a short response to an online discussion on a similar topic? Yes and no. Yes, because the explanation is good. No, because a teacher who has seen and read what you’ve written before will now be skeptical about the sudden big improvement in your writing.
My Take
I teach communications, journalism, and publishing at a local university. When asked by fellow part-time faculty how I handled generative AI in class, I told them that I’ve added a line in my “house rules” and syllabus that discourages the use of AI. In a few words, I told students that surrendering their work to AI is “lazy work.” I quickly followed it with, “I will know if you are using AI.”
Honestly, I won’t. If I do catch them doing it, there will be some serious talk. The local university where I teach has yet to come up with clear guidelines on AI use in learning and education. Without guidelines, AI use in education is open to debate.
So for now, I’m trusting my students to show intellectual honesty, hoping they don’t succumb to lazy work.
Must-Reads
Other schools have produced their own guidelines, like this policy paper on AI, which discusses the “ethical, social, and legal implications” of AI in education. As a policy paper, it is a long read. The TL;DR version:
- This paper talks about AI in higher education and its implications, opportunities, and challenges for students, teachers, and institutions.
- It also jumps into the future of AI in higher education, speculating on scenarios where AI can replace or assist teachers, personalize learning (this I like), help facilitate assessment, and transform the workforce and governance of institutions.
(That bulleted version is the result of another prompt to Copilot, where I asked it to summarize the page containing the policy paper. Neat, huh?)
Another university also underscored that AI should be used responsibly and that students must adhere to academic standards of integrity. Its student guideline discourages students from using AI to plagiarize, fabricate data, or cheat in any way (which includes using AI to write your contribution to online discussions in class).
Takeaways
There is another good guideline on AI, this time from the Associated Press, which has been my source for style guides in writing, journalism, and now AI.
AP has come up with “Standards around generative AI.” A quick read gives several takeaways that can help guide you when dealing with AI in education and learning.
Agree on a standard set of guidelines for generative AI before you start using it.
AI will not replace journalists. The same applies to teachers: AI won’t replace them.
AI is an “unvetted source.” Don’t trust that AI will give you the most accurate answer to your questions; you will always need to verify the information, the source, and the context.
AI should not be used to distort audio, video, or photos. It should not be used to distort reality. What it generates are “artful” images based on what it learned from millions of data points. Here’s a fun fact: AI still struggles to generate a convincing image of a human hand. Try it.
AI is now used to create misinformation, disinformation, and junk content on the Internet. So if you stumble on third-party content online, verify that it wasn’t made by AI. In short, keep your spider senses on high alert: test any content you see online for accuracy, fairness, and completeness. Take precautions.