<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Prem G. Kumar</title>
    <description>Personal site of Prem G. Kumar</description>
    <link>https://prem-kumar.com/</link>
    <atom:link href="https://prem-kumar.com/feed.xml" rel="self" type="application/rss+xml" />
    
      <item>
        <title>A Year of Being an AI Startup Founder</title>
        <description>&lt;p&gt;In June 2024, I co-founded a startup and served as the CTO. Our mission was to bring LLM-based tech orgs up the maturity curve. We initially started in cybersecurity but pivoted our product to evaluations, fine-tuning, model comparisons, and other problems that turned out to be more pressing for our customers. Even though we raised pre-seed funding and won some customers, we made the hard decision to wind down operations in summer 2025.&lt;/p&gt;

&lt;p&gt;This blog post is a summary of my thoughts and learnings from the past year, along with my predictions for the AI industry.&lt;/p&gt;

&lt;h2 id=&quot;tldr&quot;&gt;tl;dr&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;The AI bubble is deflating right now in summer 2025.&lt;/li&gt;
  &lt;li&gt;Consumer access to ChatGPT is at best net neutral, if not a net negative, to society. We are unprepared to handle what it’s doing to us.&lt;/li&gt;
  &lt;li&gt;API access to LLMs is a net positive only for enterprises that use it with maturity and strong investment in evaluations, process, and cybersecurity. Even code auto-complete should be used with care, let alone coding agents and customer-facing products.&lt;/li&gt;
  &lt;li&gt;Creative fields have not figured out a good use for LLM and image generation technology, and maybe never will due to the fundamental nature of these models.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-ai-bubble-is-deflating&quot;&gt;The AI bubble is deflating&lt;/h2&gt;

&lt;h3 id=&quot;unit-economics&quot;&gt;Unit economics&lt;/h3&gt;

&lt;p&gt;It costs an LLM provider like OpenAI billions of dollars to train a new model. Each new model requires more compute, which means newer GPUs and bigger data centers. This costs a lot of money. In theory, they should be able to cover this cost with the profit on their consumer business and their API business. However, OpenAI CEO Sam Altman has recently admitted that, at best, the company breaks even on each inference. &lt;a href=&quot;https://www.wheresyoured.at/how-to-argue-with-an-ai-booster/#ultimate-booster-quip-the-cost-of-inference-is-coming-down-this-proves-that-things-are-getting-cheaper&quot;&gt;The unit economics are not adding up&lt;/a&gt; and certainly do not justify the massive valuations these LLM providers are getting from investors.&lt;/p&gt;

&lt;p&gt;Even companies like Microsoft, Google, and Meta are funding their generative AI divisions using revenue from their other business units.&lt;/p&gt;

&lt;p&gt;Valuations are based on vibes. Public backlash, along with the CEOs themselves &lt;a href=&quot;https://arstechnica.com/information-technology/2025/08/sam-altman-calls-ai-a-bubble-while-seeking-500b-valuation-for-openai/&quot;&gt;pumping the brakes on their own hype&lt;/a&gt;, is shifting those vibes. The CEOs have wised up to the fact that they can no longer sustain the hype required to support their valuations. The bungled release of GPT-5, and the disappointment users felt, was the tipping point. The vibes are undeniably shifting.&lt;/p&gt;

&lt;p&gt;I will concede that the use of generative AI is rising in various industries &lt;a href=&quot;https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&quot;&gt;according to this March 2025 McKinsey report.&lt;/a&gt; But the report also says:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Yet gen AI’s reported effects on bottom-line impact are not yet material at the enterprise-wide level. More than 80 percent of respondents say their organizations aren’t seeing a tangible impact on enterprise-level EBIT from their use of gen AI. Organizations have been experimenting with gen AI tools. Use continues to surge, but from a value capture standpoint, these are still early days—few are experiencing meaningful bottom-line impacts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So this is a technology that’s been pushed on the workforce for two years, and we’re just not seeing the benefits its boosters predicted. When should we call it a bust? One more year of this? Two?&lt;/p&gt;

&lt;h3 id=&quot;public-sentiment&quot;&gt;Public sentiment&lt;/h3&gt;

&lt;p&gt;The more people experience ChatGPT, the more of them over-hype and over-rely on LLMs to the point of being annoying. More people then come into contact with these bad actors, permanently and negatively coloring their view of consumer LLM technology.&lt;/p&gt;

&lt;p&gt;Let’s say you work with someone and you ask them for help on a topic. You message them on Slack asking them a question. Their 3-paragraph reply begins in perfect LLM tone: “Certainly, I can help you with that!” You get pissed off and feel betrayed because they couldn’t take the time to talk to you, and instead did the modern equivalent of &lt;a href=&quot;https://letmegooglethat.com/&quot;&gt;lmgtfy&lt;/a&gt;. You come away from this interaction associating this annoying coworker with LLMs. You vow never to touch ChatGPT and in fact you think it makes the world a worse place.&lt;/p&gt;

&lt;p&gt;I’ve heard this story from friends and family a couple dozen times in the last few months. Anecdata, yes, but it’s a repeating pattern that’s worth taking seriously if you think LLMs are the future of work.&lt;/p&gt;

&lt;h2 id=&quot;chatgpt-considered-harmful&quot;&gt;ChatGPT considered harmful&lt;/h2&gt;

&lt;p&gt;This next point is related to the previous one.&lt;/p&gt;

&lt;p&gt;The AI hype in the news media is giving way to an increasing number of horror stories: people lured by chatbots to injury or death, psychosis caused by agreeable LLMs keeping vulnerable people trapped in negative thought loops, even a teenager committing suicide after an LLM explicitly encouraged him to do so.&lt;/p&gt;

&lt;p&gt;When an LLM is used in an enterprise context, guardrails against these cases matter far less. If a software engineer integrates the Anthropic API to extract phone numbers from PDFs, the system prompt doesn’t need a bullet point telling the LLM to refrain from answers that might harm someone.&lt;/p&gt;

&lt;p&gt;I believe that the free tier of ChatGPT is eroding some of the basic functions of society. By introducing synthetic interactions into everyday life, the side-effects of real organic interactions get sanded away, leaving behind some strange hypernormal stimulus that our society and our brains are ill-equipped to handle.&lt;/p&gt;

&lt;p&gt;Let’s be real: LLMs are not substitutes for people. It’s not like autocomplete, it’s not like an intern, and it’s not like a therapist. It’s something else entirely. Most of us don’t have the mental model to be able to hold these ideas in tension every time we “talk” to ChatGPT. In fact, we aren’t actually talking to anything! See? We don’t even have the right words to describe what’s happening when a user interacts with an LLM. Let alone the capability to correctly slot it into the exact right gap in our lives.&lt;/p&gt;

&lt;h2 id=&quot;enterprise-use-cases&quot;&gt;Enterprise use cases&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Tech orgs are not prepared for the cybersecurity risk of LLMs. LLMs are fundamentally insecure pieces of technology both due to their stochasticity and the fact that &lt;a href=&quot;https://simonwillison.net/2025/Aug/25/agentic-browser-security/&quot;&gt;their structure prevents them from differentiating between trusted and untrusted input token streams.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Tech orgs need to continuously evaluate their LLM output quality in production, regardless of whether it’s internal tooling or customer-facing products. To neglect this is malpractice.&lt;/li&gt;
  &lt;li&gt;Integrating LLMs into a team is a change management issue because teams need processes in place to get the subject matter experts to be directly responsible for what their LLMs are doing. This involves regular manual evaluations of output samples and error analysis. Internal tooling (either built or bought) can reduce friction, but at the end of the day, subject matter experts must be incentivized to do the tedious work of finding issues and correcting them.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Diffusing the responsibility of the LLM into the organization at large leads to perverse incentives. LLMs become a way to do stupid things faster. If there is no individual responsible for an LLM’s outputs, it can diverge from intentions quickly, even as soon as it gets deployed.&lt;/p&gt;

&lt;p&gt;Quoting the &lt;a href=&quot;https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&quot;&gt;March 2025 McKinsey report&lt;/a&gt; I linked above:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;So far, less than one-third of respondents report that their organizations are following most of the 12 adoption and scaling practices, with less than one in five saying their organizations are tracking KPIs for gen AI solutions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href=&quot;https://web.archive.org/web/20160910002130/http://worrydream.com/refs/Brooks-NoSilverBullet.pdf&quot;&gt;There is no silver bullet&lt;/a&gt; but it’s really tempting for tech leaders to pretend like LLMs are one. Whether it’s copilots for writing code or customer service chatbots, it’s hard to resist simply dropping in an LLM-based solution and hoping for the best. Common-sense measures are often ignored for the sake of product velocity precisely because they’re a little challenging to implement. I believe this will come back to bite tech orgs in the near future.&lt;/p&gt;

&lt;h2 id=&quot;using-llms-for-creation&quot;&gt;Using LLMs for creation&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;AI is fake and it sucks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is a phrase I’ve heard many times and honestly, in the context of creative work, it’s actually getting to a truth many people in the LLM world don’t want to admit.&lt;/p&gt;

&lt;p&gt;What is the purpose of art? It’s a way for a human being to reach across space and time, maybe centuries, to touch another human being. When an LLM or image model produces a string of text or a rectangle of pixels, even in its most ideal form it’s curated by a human being merely to transmit a copy that produces a few microliters of dopamine in someone else’s brain. At its center, there’s a hollowness that reveals a lot about how little creative work is valued.&lt;/p&gt;

&lt;p&gt;People like to bring up that director and artist David Lynch, shortly before his death, &lt;a href=&quot;https://www.forbes.com/sites/danidiplacido/2025/06/05/natasha-lyonne-sparks-backlash-after-quoting-david-lynch/&quot;&gt;told actor and writer Natasha Lyonne&lt;/a&gt; that AI is a tool, like a pencil, and it can be used for good. First off, she is an unreliable narrator in this case, and secondly, there’s not much context around what he meant by that. I think he could be right but only in a specific context.&lt;/p&gt;

&lt;p&gt;Oblique strategy cards and random number generators are two analogous technologies that can be helpful to an artist. You can roll a set of dice to decide what key to write your song in. You can &lt;a href=&quot;https://samplereality.com/2025/07/18/the-poetics-and-power-of-small-language-models/&quot;&gt;use procedural generation&lt;/a&gt; to create art. These are tools in the creative process that allow an artist to unblock themselves and create a more complete expression of what they want to say. But creating a novel or a digital painting from a statistical model is at best a novelty, not something that fulfills the aim of art in human society.&lt;/p&gt;

&lt;h2 id=&quot;ethics&quot;&gt;Ethics&lt;/h2&gt;

&lt;p&gt;I’m not a moral philosopher but I’d like to talk about the ethics of generative AI.&lt;/p&gt;

&lt;p&gt;LLMs are trained on public domain data as well as whatever text and images LLM providers can get their hands on. It’s a shredded-up version of all recorded history, nonfiction, fiction, and everything in between. The training data is the blood, sweat, and tears of thousands of creators.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://arstechnica.com/tech-policy/2025/08/authors-celebrate-historic-settlement-coming-soon-in-anthropic-class-action/&quot;&gt;Many of the living creators of these works have expressed that they don’t want their works used in this way.&lt;/a&gt; The legality of LLM providers training models from these works is still being settled with &lt;a href=&quot;https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/&quot;&gt;lawsuits moving through the US justice system.&lt;/a&gt; But ethically, every inference you make using an LLM is against the wishes of many people whose works directly power it. It’s something for you to consider before you use it.&lt;/p&gt;

&lt;p&gt;The SF Bay Area rationalist community, whose philosophy strongly influences the tech CEOs in charge of this industry, generally subscribes to &lt;a href=&quot;https://plato.stanford.edu/entries/consequentialism/&quot;&gt;consequentialist ethics&lt;/a&gt;. They hold the belief that LLMs help people more than they harm people. The allure of utilitarianism’s “number go up” reasoning is certainly strong. But that doesn’t mean all other systems of ethics are invalid. A large portion of ethics and moral philosophy professors support other systems such as virtue ethics. Even if you believe that the majority of people have had their lives improved by LLMs, consider the possibility that it’s a harmful technology along another axis that a lot of people in the field of ethics care about.&lt;/p&gt;

&lt;h3 id=&quot;electricity-and-water-use&quot;&gt;Electricity and water use&lt;/h3&gt;

&lt;p&gt;Many skeptics bring up the power use and water use of training and inference of LLMs. I haven’t seen compelling evidence that it’s substantially increasing greenhouse gases or taking water away from communities. The news articles and opinion pieces I’ve seen have not made the case strongly enough for me to abstain from LLMs for those specific reasons.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about&quot;&gt;Here’s an article that makes a good faith effort to show that LLM and image generation usage at current levels isn’t something to be freaked out about.&lt;/a&gt; Video generation, on the other hand, is to be avoided because it requires orders of magnitude more energy.&lt;/p&gt;

&lt;p&gt;What gives me pause, however, is the amount of investment that cloud providers are putting into building new data centers. As these data centers come online, we may start to see these ill effects. That, in conjunction with every Google search now making LLM inferences, is cause for concern about rising energy consumption in the near future. I am hopeful, however, that the second-order effects will incentivize building small modular nuclear reactors and other clean energy investments like wind and solar.&lt;/p&gt;

&lt;h2 id=&quot;overall-thoughts&quot;&gt;Overall thoughts&lt;/h2&gt;

&lt;p&gt;I believe LLM usage will come to an equilibrium soon. It’s a powerful technology if integrated in the right way. The C-suites that force their workers to use LLMs are buying into hype and may be driving teams to use LLMs irresponsibly. Consumer usage of ChatGPT, Claude, and other free-tier LLMs is probably harmful to society in the long run. The LLM industry will see a market correction soon that I hope will set some of these things right.&lt;/p&gt;
</description>
        <pubDate>Mon, 25 Aug 2025 20:15:00 +0000</pubDate>
        <link>https://prem-kumar.com/2025/08/25/ai-thoughts/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2025/08/25/ai-thoughts/</guid>
      </item>
    
      <item>
        <title>Scaling Mastodon</title>
        <description>&lt;p&gt;Recently, Mastodon instances had to deal with an unprecedented number of new user signups. I found two instructive blog posts about how admins have been dealing with this influx.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://nora.codes/post/scaling-mastodon-in-the-face-of-an-exodus/&quot;&gt;Nora Codes, about how weirder.earth had to scale.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://blog.freeradical.zone/post/surviving-thriving-through-2022-11-05-meltdown/&quot;&gt;teknique, about how Free Radical solved scaling issues partially by throwing a Raspberry Pi into the mix.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both of them reveal the bottlenecks of Mastodon’s architecture. Some parts of the architecture can handle much larger rates of activity than others. Allocating resources intelligently allows the instance to scale much further with the same hardware.&lt;/p&gt;

&lt;p&gt;Both deal with the same problems in different ways, approaching them with different philosophies. They’ve taught me a lot not just about sysadmin skills but also about software architecture in a more general sense.&lt;/p&gt;
</description>
        <pubDate>Thu, 17 Nov 2022 19:58:00 +0000</pubDate>
        <link>https://prem-kumar.com/2022/11/17/mastodon-scaling/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2022/11/17/mastodon-scaling/</guid>
      </item>
    
      <item>
        <title>Reimplementing go&apos;s sync package</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://blogtitle.github.io/go-advanced-concurrency-patterns-part-3-channels/&quot;&gt;This blog post from 2019 titled &lt;em&gt;Go advanced concurrency patterns: part 3 (channels)&lt;/em&gt;&lt;/a&gt; – despite its unassuming name – is an exercise I found fun to read and enlightening.&lt;/p&gt;

&lt;h2 id=&quot;background&quot;&gt;Background&lt;/h2&gt;

&lt;p&gt;Go has a package called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sync&lt;/code&gt; in its standard library. It contains a few useful tools like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Mutex&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;WaitGroup&lt;/code&gt;, allowing for traditional concurrency patterns in your code. These are called concurrency primitives. They’re definitely useful if you’re thinking in the traditional object-oriented way. In former roles, when I wrote Java, I would have reached for these.&lt;/p&gt;

&lt;p&gt;However, a principle in the culture of Go is to “share memory by communicating.” You’ll see this repeated in many places. This means that when your instinct is to look for a concurrency primitive that will satisfy your requirements, see if there’s a way to &lt;a href=&quot;https://golangdocs.com/channels-in-golang&quot;&gt;use channels&lt;/a&gt; instead. Using channels to communicate data across goroutines (threads) makes software easier to reason about and debug. The handoffs (sending to and receiving from channels) are natural boundary points that help with clarity and simple design.&lt;/p&gt;

&lt;p&gt;I’m using the word &lt;em&gt;simple&lt;/em&gt; here the way &lt;a href=&quot;https://www.youtube.com/watch?v=LKtk3HCgTa8&quot;&gt;Rich Hickey uses it in this talk&lt;/a&gt;. Concurrent code is complex by default, and any opportunity to simplify is a gift. Channels are a natural simplifier.&lt;/p&gt;

&lt;p&gt;The above blog post takes the primitives in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sync&lt;/code&gt; package and implements each one using channels. This is a simple and effective demonstration of how expressive channels truly are. Channels can seem like not that big of a deal at first glance; it’s hard to appreciate them until you see idiomatic code like this.&lt;/p&gt;
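&lt;p&gt;To make the idea concrete, here’s a minimal sketch of the trick in Go: a toy mutex built from a channel with a buffer of one. (The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ChanMutex&lt;/code&gt; name and API are my own for illustration, not from the linked post or the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sync&lt;/code&gt; package.)&lt;/p&gt;

```go
package main

import "fmt"

// ChanMutex is a hypothetical mutex built from a 1-buffered channel:
// sending into the channel acquires the lock (and blocks while it is
// held, because the buffer is full), receiving releases it.
type ChanMutex struct{ ch chan struct{} }

func NewChanMutex() *ChanMutex {
	return &ChanMutex{ch: make(chan struct{}, 1)}
}

func (m *ChanMutex) Lock()   { m.ch <- struct{}{} }
func (m *ChanMutex) Unlock() { <-m.ch }

func main() {
	m := NewChanMutex()
	counter := 0
	done := make(chan struct{})

	// 100 goroutines each increment the counter under the lock.
	for i := 0; i < 100; i++ {
		go func() {
			m.Lock()
			counter++
			m.Unlock()
			done <- struct{}{}
		}()
	}
	// Wait for all goroutines before reading the counter.
	for i := 0; i < 100; i++ {
		<-done
	}
	fmt.Println(counter) // 100
}
```

&lt;p&gt;Note the design choice: mutual exclusion falls out of the channel’s buffer size, with no explicit locking machinery at all. That’s the expressiveness the post demonstrates across the rest of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sync&lt;/code&gt; primitives.&lt;/p&gt;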

&lt;h2 id=&quot;other-resources&quot;&gt;Other resources&lt;/h2&gt;

&lt;p&gt;If you’re not so familiar with how concurrency works in Go, here are some resources I’ve found useful.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://teivah.medium.com/a-closer-look-at-go-sync-package-9f4e4a28c35a&quot;&gt;This post&lt;/a&gt; is pretty good at introducing you to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sync&lt;/code&gt; package in case you need a reference other than &lt;a href=&quot;https://pkg.go.dev/sync&quot;&gt;the excellent docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’d like a primer on concurrency patterns in Go, I recommend this book: &lt;a href=&quot;https://www.oreilly.com/library/view/concurrency-in-go/9781491941294/&quot;&gt;Concurrency in Go, by Katherine Cox-Buday&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;closing-thoughts&quot;&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://blogtitle.github.io/go-advanced-concurrency-patterns-part-3-channels/&quot;&gt;&lt;em&gt;Go advanced concurrency patterns: part 3 (channels)&lt;/em&gt;&lt;/a&gt; has given me a new perspective on concurrency, not just in Go, but as a concept generally. I will continue to make fun of its unwieldy title, though.&lt;/p&gt;

&lt;p&gt;Even if you don’t program in Go, I’d recommend that you take a look at the entire &lt;a href=&quot;https://blogtitle.github.io/categories/concurrency/&quot;&gt;series of posts&lt;/a&gt;, each of which is interesting in its own way.&lt;/p&gt;
</description>
        <pubDate>Sun, 11 Sep 2022 11:57:00 +0000</pubDate>
        <link>https://prem-kumar.com/2022/09/11/implementing-sync/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2022/09/11/implementing-sync/</guid>
      </item>
    
      <item>
        <title>Optimizing for engagement metrics</title>
        <description>&lt;p&gt;This &lt;a href=&quot;https://medium.com/understanding-recommenders/whats-right-and-what-s-wrong-with-optimizing-for-engagement-5abaac021851&quot;&gt;post by three AI ethics researchers&lt;/a&gt; talks about the concept of engagement as it relates to recommender systems and lists some problems with it. For example, this post points out that internal research from Facebook shows that the most outrage-causing, anger-inducing, and polarizing content gets the most engagement, which causes this content to spread to even more users.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;One large review of “moral contagion” found “each message is 12% more likely to be shared for each additional moral-emotional word.” Other studies have found that divisive and extreme material is more likely to drive engagement. Internal research at Facebook has found that “no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ultimately, rather than arguing for getting rid of engagement as a metric, this post argues for scrutinizing whether a form of engagement is good or bad, and trying to avoid the bad kinds.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Despite these issues, optimizing for engagement still forms the core of content recommendation on most platforms. While it can be important to offer alternatives, engagement is just too useful a signal of multi-stakeholder value to give up. Instead, it may be possible to get better at distinguishing “good” from “bad” engagement.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I would have liked to see more recommendations in this post of what constitutes good engagement. They do provide a small handful of examples, but ultimately leave it up to the product designers to come up with contextually relevant implicit and explicit signals of engagement from users.&lt;/p&gt;
</description>
        <pubDate>Wed, 27 Apr 2022 11:49:00 +0000</pubDate>
        <link>https://prem-kumar.com/2022/04/27/optimizing-engagement/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2022/04/27/optimizing-engagement/</guid>
      </item>
    
      <item>
        <title>Digital gardening</title>
        <description>&lt;p&gt;I’ve come across the idea of digital gardens a few times in the last year. From &lt;a href=&quot;https://maggieappleton.com/garden-history&quot;&gt;this great introduction to the concept:&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Think of it as a spectrum. Things we dump into private WhatsApp group chats, DMs, and cavalier Tweet threads are part of our chaos streams - a continuous flow of high noise / low signal ideas. On the other end we have highly performative and cultivated artefacts like published books that you prune and tend for years.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;Gardening sits in the middle. It’s the perfect balance of chaos and cultivation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Even though this blog is not even close to a digital garden, I try to strike this balance when I post something here. I don’t want my posts to be completely polished products but I do want to put some effort into them. These blog posts represent me in the virtual world to a certain extent, especially because I’m not active on the standard social media platforms. On the other hand, I need these posts to have a certain amount of fluency and fluidity.&lt;/p&gt;

&lt;p&gt;It’s not a satisfying solution to &lt;a href=&quot;https://stackingthebricks.com/how-blogs-broke-the-web/&quot;&gt;the blogification of the web&lt;/a&gt;. However, keeping these considerations in mind when posting your writing to the web is a good place to start.&lt;/p&gt;
</description>
        <pubDate>Tue, 21 Dec 2021 18:15:00 +0000</pubDate>
        <link>https://prem-kumar.com/2021/12/21/digital-gardening/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2021/12/21/digital-gardening/</guid>
      </item>
    
      <item>
        <title>The fallout of Google&apos;s AI ethics meltdown</title>
        <description>&lt;p&gt;Recently, Google fired Dr. Gebru, one of the leaders in the AI ethics space, for some &lt;a href=&quot;https://venturebeat.com/2020/12/10/timnit-gebru-googles-dehumanizing-memo-paints-me-as-an-angry-black-woman/&quot;&gt;poorly-justified reasons&lt;/a&gt;. It was big news in the machine learning world because of the way it happened, who it happened to, and the way Google handled the aftermath. It was all a big mess.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://venturebeat.com/2020/12/16/from-whistleblower-laws-to-unions-how-googles-ai-ethics-meltdown-could-shape-policy/&quot;&gt;This feature in VentureBeat&lt;/a&gt; contains a list of predictions and suggestions on what could and might happen next in the AI ethics space. I feel this is a clear-headed list and lays out several paths forward.&lt;/p&gt;

&lt;p&gt;Unfortunately, I don’t think any of these paths is a long-term solution. Algorithmic bias is baked into the way we train models, to the point where it takes a ton of extra work and effort to even recognize it. On top of that, the most well-funded organizations, which have the reach to affect the largest populations, are incentivized to hide the negative externalities of the products they build. Layering these problems together makes it really difficult to address all of it at once. Legislation, whistle-blowing, and collective action may cut the Gordian knot and rein in these organizations, but the fundamental issues of machine learning still remain.&lt;/p&gt;
</description>
        <pubDate>Wed, 20 Jan 2021 22:55:00 +0000</pubDate>
        <link>https://prem-kumar.com/2021/01/20/google-policy/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2021/01/20/google-policy/</guid>
      </item>
    
      <item>
        <title>The paywall is a barrier</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://www.currentaffairs.org/2020/08/the-truth-is-paywalled-but-the-lies-are-free/&quot;&gt;This op-ed in Current Affairs magazine&lt;/a&gt; is quite opinionated, as op-eds in Current Affairs are wont to be, but it does start us off with a valid issue. What do we do in a landscape where the highest quality journalism and research is hidden behind paywalls that most people simply bounce off of?&lt;/p&gt;

&lt;p&gt;First, the author talks about news being something that most people prefer to get for free, which means they will only ever see low-quality journalism, partial to whoever pays the bills. But he goes one level deeper and talks about academic publishing.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Possibly even worse is the fact that so much academic writing is kept behind vastly more costly paywalls. A white supremacist on YouTube will tell you all about race and IQ but if you want to read a careful scholarly refutation, obtaining a legal PDF from the journal publisher would cost you $14.95, a price nobody in their right mind would pay for one article if they can’t get institutional access.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;On the other hand, pseudo-scholarship is easy to find. Right-wing think tanks like the Cato Institute, the Foundation for Economic Education, the Hoover Institution, the Mackinac Center, the American Enterprise Institute, and the Heritage Foundation pump out slickly-produced policy documents on every subject under the sun. They are utterly untrustworthy—the conclusion is always going to be “let the free market handle the problem,” no matter what the problem or what the facts of the case. But it is often dressed up to look sober-minded and non-ideological.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The rest of the op-ed is a possible solution to this problem: rethinking the foundations of intellectual property law and building a Tower of Babel style universal repository of all media.&lt;/p&gt;

&lt;p&gt;Even though this may be impractical, or at the very least, extremely difficult to accomplish, it’s still an interesting idea to consider. I recommend this article because I think there’s value in thinking about ambitious ideas like this.&lt;/p&gt;
</description>
        <pubDate>Wed, 02 Sep 2020 23:50:00 +0000</pubDate>
        <link>https://prem-kumar.com/2020/09/02/truth-is-paywalled/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2020/09/02/truth-is-paywalled/</guid>
      </item>
    
      <item>
        <title>What should have happened at the tech antitrust hearing</title>
        <description>&lt;p&gt;The &lt;a href=&quot;https://www.nytimes.com/2020/07/29/technology/big-tech-hearing-apple-amazon-facebook-google.html&quot;&gt;hearing&lt;/a&gt; this week where Pichai, Zuckerberg, Cook, and Bezos appeared (virtually) before a congressional subcommittee for anti-trust was a big event. However, I don’t think it was big enough.&lt;/p&gt;

&lt;p&gt;Firstly, the format didn’t lend itself to substantial questions. These congressional hearings are more theatrical than substantial, anyway. Having four CEOs appear at the same time was bad, and doing it virtually was worse.&lt;/p&gt;

&lt;p&gt;Secondly, even though there is bipartisan support for breaking up these monopolies, the two parties have their own goals. The Democrats want a competitive marketplace, and the Republicans want to air their sense that they are somehow being persecuted.&lt;/p&gt;

&lt;p&gt;A hearing may start to air the public’s grievances with these companies, but it doesn’t mean much if it’s not immediately followed up by action. I really hope that Congress keeps up its momentum and starts to pass some legislation to limit the powers of these companies.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.profgalloway.com/fire-fawning&quot;&gt;I like Prof Scott Galloway’s list of questions that Congress should have asked.&lt;/a&gt; This article is a goldmine of data about how massive and impactful these monopolies really are. I also recommend &lt;a href=&quot;https://www.profgalloway.com/dawg-on-the-wall&quot;&gt;his followup&lt;/a&gt; about what actually happened at the hearing.&lt;/p&gt;
</description>
        <pubDate>Fri, 31 Jul 2020 21:33:00 +0000</pubDate>
        <link>https://prem-kumar.com/2020/07/31/antitrust-hearing/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2020/07/31/antitrust-hearing/</guid>
      </item>
    
      <item>
        <title>Lessons from the recent Yann LeCun controversy</title>
        <description>&lt;p&gt;Recently, deep learning researcher Yann LeCun wrote a tweet that sparked an important debate in the academic community. As someone who does research and development in the industry, I can say that this discussion reaches beyond academia and affects all of us.&lt;/p&gt;

&lt;p&gt;At its core, it started out as a debate over whether a model’s architecture (and related design decisions) or the data it’s trained on ought to be the object of our attention when talking about biased results in the real world. Over time, however, it became a referendum on how this debate is carried out and the way in which Professor LeCun communicates with those who disagree with him in general.&lt;/p&gt;

&lt;p&gt;This is especially true because of the statements that Professor LeCun made about the responsibilities of researchers vs. engineers:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;The consequences of bias are considerably more dire in a deployed product than in an academic paper. - @ylecun, 2020/06/21&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href=&quot;https://thegradient.pub/pulse-lessons/&quot;&gt;Here is a thorough breakdown of the main spine of the discussion.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s a well-written post that’s worth a read. It presents multiple sides and still stays focused on the main throughline: what are our responsibilities as technologists, and how do we conduct ourselves to inflict the least amount of harm on each other and the world? It is still absolutely an open question, and I’m not sure when we’ll get a satisfying answer. Our field is comparatively young, so I am hopeful that we will gain enough maturity that we won’t have to keep having these conversations.&lt;/p&gt;
</description>
        <pubDate>Sat, 11 Jul 2020 00:57:00 +0000</pubDate>
        <link>https://prem-kumar.com/2020/07/11/pulse-lessons/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2020/07/11/pulse-lessons/</guid>
      </item>
    
      <item>
        <title>A profile of EU regulator Margrethe Vestager</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://foreignpolicy.com/2020/07/04/margrethe-vestager-is-still-coming-for-big-tech/&quot;&gt;Take a look at this article about Margrethe Vestager.&lt;/a&gt; In case you don’t know who she is, she has the job of representing the EU when holding tech monopolies accountable&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Specifically, this part stood out to me:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Our work here is still a work in progress. In these individual cases, the fine is a punishment for past illegal behavior. The second element in such a decision is a cease-and-desist order. And then we have the third element, which is a more restorative approach. Because what we have seen in Google’s case is that even when the illegal behavior stops, that doesn’t necessarily open the markets. Once you own a particular market, it takes a lot of effort to open that market for competitors. And this is why we also need a regulatory approach.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The way trust-busting works is not just to interrogate whether a given company has a monopoly on a market and come up with punitive measures. It’s to set up a sustainable path forward so the market is able to support multiple competing players. This is not an easy task, and I’m interested to see what happens next.&lt;/p&gt;

&lt;p&gt;If you’d like a more comprehensive view of what’s going on with the European Union and their relationship to the tech industry, take a look at &lt;a href=&quot;https://techcrunch.com/2019/06/15/where-is-the-eu-going-on-tech-and-competition-policy/&quot;&gt;this TechCrunch article.&lt;/a&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Read more about Vestager on &lt;a href=&quot;https://en.wikipedia.org/wiki/Margrethe_Vestager&quot;&gt;her Wikipedia page.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Mon, 06 Jul 2020 12:12:00 +0000</pubDate>
        <link>https://prem-kumar.com/2020/07/06/eu-regulator-vestager/</link>
        <guid isPermaLink="true">https://prem-kumar.com/2020/07/06/eu-regulator-vestager/</guid>
      </item>
    
  </channel>
</rss>
