AI disclosure & my thoughts

In the documentation for my website, I disclose my use of AI/LLMs. I feel it’s important to be honest about how and where we use AI. I think people should:

  • Accurately represent their professional capabilities
  • Consider and define their ethical boundaries
  • Continually practice affirming those boundaries

You can expect to see a disclosure each time I use an LLM for a project. Nothing generated by an LLM will be presented devoid of context. Facts and figures will be sourced from reputable publications.

Q&A

How do I use AI?

I learn technical skills best via total immersion, when theory is immediately combined with praxis and applied to a problem I’m motivated to solve. I use LLMs more like a personalized YouTube tutorial than a “Do everything for me” button.

In that context, I use LLMs as a learning assistant, to provide context when documentation is intended for more experienced developers. I don’t rely on them as a primary source; I corroborate their output and feed them up-to-date documentation. I think carefully about security implications. I read every line of code. I test. I don’t copy/paste everything, often reading and writing the code directly to build familiarity with syntax and structure.

I never use AI to create or review writing.


I use AI…

As a more experienced but error-prone dev

I generally use AI when the following criteria are met:

  • I’m trying to achieve a moderately complex goal, AND
  • Acquiring the necessary knowledge organically conflicts with personal interest, time constraints, etc., AND/OR
    • … the goal lies wholly outside my experience base in that domain, OR
    • … it requires prerequisite experience in multiple domains.

To generate specific examples

The criteria described above usually relate to personal coding projects. I don’t consider myself a developer and I have no ambitions to become one. I will ask an LLM to generate example snippets, explain unfamiliar syntax, or modify official examples to suit a different use case.

For probing boundaries around blind spots

I also use AI when I need to isolate an unknown unknown (a blind spot) and I don’t have time for hours of Google-fu. If I can’t define what I need, I can’t accurately query reputable sources. When Googling “what is this thing that I don’t have words for” or poking around in textbooks doesn’t work, I’ve found LLMs to be fast and useful.

Does it work well?

Yes, the way I use it works well most of the time.

Do you like LLMs?

Not really. I often develop strong bonds with my tools and other inanimate objects. I talk to them a lot but they don’t usually say anything back. There’s an uncanny valley ick factor that contraindicates my bond development with a talking machine.

Do you have ethical concerns?

Yes. Ethical, environmental, and sociological. I’m interested in discussing them at length, and I’m willing to continually re-evaluate if, when, and how I use LLMs, and which products I use.

Any other takeaways?

  • Managing an LLM is its own kind of work.
  • Using AI for code helps me achieve goals that would otherwise not be feasible.
  • Using LLMs, especially for code, has made me more appreciative of developer docs. There is no substitute for high-quality documentation straight from the source.
  • I’m interested in trying a local, offline model which I think would be more personal and carry less baggage.