Commit b687a7b: prez: Content WIP
vorburger committed Sep 21, 2024 (1 parent: c1ffefe)
Showing 8 changed files with 182 additions and 34 deletions.
Binary file added docs/prez/sli.dev/public/images/h100.webp (binary files not shown).
docs/prez/sli.dev/slides.md: 182 additions & 34 deletions
backgroundSize: contain
---

# 🫢

<!--
How the heck does this work?! And what does all this mean for the future? BTW, you can try this out for yourself on gemini.google.com ...
Prompt: _Can you make this sound cooler?_
https://www.jasondavies.com/wordcloud/
-->

---

# History 🏯

<br/>

_Artificial Intelligence_ (AI) is arguably [an 🏺 ancient dream](https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence).

_[Modern-day AI](https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence)_ dates from the ~1960s, with the emergence of Computer Science.

Initially 🔣 _"[symbolic](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence)"_ with _"rules",_ e.g. _Expert Systems_ à la [Cyc](https://en.wikipedia.org/wiki/Cyc).

AI ❄️ Winters.

<br/>

_Machine Learning_ (ML), with _Deep Learning_ for _Generative AI,_ are subfields of AI - with a different take; why?

<!--
AI arguably started in antiquity - the 1st link is to an interesting "AI history" sort of table on Wikipedia, which mentions e.g. the ancient Greek myth of Talos, the giant automaton in Crete from ca. 700 BC, or the Jewish Golem, or the Takwin of Muslim alchemists from the 8th century, or the Homunculus of 16th-century European alchemists such as Paracelsus, or perhaps even the Tulpa of Tibetan Buddhism, and later Theosophists.
...
To give you a current day timeline, a scientific paper that is often referred to as a breakthrough milestone is the Transformer paper by researchers at Google, published (only) in 2017.
But let's take a quick detour...
-->

---
layout: fact
---

<br>

Programming gives computers precise instructions, like:

- Print `hello, world`
- Variable `i = 7`
Written in a computer language - try out [Scratch](https://scratch.mit.edu), it's fun!

<small>(Or C++ or Java & Kotlin or C# or Python or JavaScript & TypeScript or Go or Rust, etc.)</small>
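To make this concrete, here is a minimal sketch of those two instructions in Python (the `i * 6` line is just an invented extra illustration):

```python
# The "precise instructions" from above, as runnable Python code:
print("hello, world")  # print a greeting
i = 7                  # store the number 7 in a variable named i
print(i * 6)           # use the variable in a calculation: prints 42
```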

<!--
Programming, AKA coding, gives computers very precise instructions - which we call "Code" - for what you want them to do.
-->

---
layout: fact
---
<!--
Quick show of hands... who a) ... who b) ❓ 😆
-->

---

# Machine Learning (ML)

Treat computers like 👶 babies! Probabilistic:

- _♟️ Chess_ through _trial & error_ - instead of _rules_
- _👀 Vision_ with example images - instead of _algorithms_
- _📸 Camera 📱 quality_ similarly - instead of _filters_
- _✍️ Grammar Checking_ from examples - instead of _a language grammar_
- _🌍 Machine translation_ by "reading" human-translated books - instead of _2 grammars_
- _💬 Large Language Models_ (LLMs) by _"reading"_ **lots** of text

<br/>

The idea is not new ([backpropagation](https://en.wikipedia.org/wiki/Backpropagation), ~1980s)...

...but only quite recently (~2010s+) did it become 🚀 viable at scale, due to the

availability of _Big Data_ and massive storage & _Super Computer_ infrastructures in _☁️ Clouds._

<!--
The idea of ML is ~ just to treat computers as 👶 babies, instead of _programming_ them! For example:
- Chess through _trial & error_ - instead of _rules_
- _Vision_ with example images - instead of _algorithms_
- _Machine translation_ by "reading" human-translated books - instead of grammar
- _Large Language Models_ (LLMs) by "reading" _A LOT_ of text
LLMs basically do the same, to be able to reply to prompts where you chat with them and ask them questions.
The basic idea is not that new ([backpropagation](https://en.wikipedia.org/wiki/Backpropagation) ~1980s?)...
...but only recently (~2010s+) did it suddenly turn out to be a lot more 🚀 interesting, due to the
emerging availability of _Big Data_ and massive storage & _Super Computer_ infrastructures in _☁️ Clouds._
-->

---
layout: center
---

# Magic?

Is ML 🪄 magic? Not at all... the basic idea is quite simple, actually! To illustrate:

๐Ÿซ Remember? `y = a*x + b` ๐Ÿงฎ

<br/>

Given a _training data set_ of e.g. car fuel efficiency `(x,y)` _points,_

where `X` is a 🚗 car's 🏋️ weight, and `Y` is its KMs per Liter of Gas (or 🔋)...

...find the "best" `a` and `b` - that's a _"model",_ of 2 parameters!

<br/>

Given a new car's _weight_ (X), you can _predict_ its consumption (Y).
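A minimal sketch of this whole idea in plain Python; the fuel-efficiency points below are invented for illustration, and a brute-force search stands in for the gradient descent real ML uses:

```python
# "Train" y = a*x + b on made-up (weight in kg, km per liter) points,
# by brute-force trying many values for a and b - no magic involved.
points = [(800, 22.0), (1200, 16.0), (1600, 12.5), (2000, 9.0)]

def error(a, b):
    # Sum of squared differences between prediction and truth.
    return sum((a * x + b - y) ** 2 for x, y in points)

best_a, best_b = min(
    ((a / 1000, b / 10) for a in range(-30, 1) for b in range(0, 400)),
    key=lambda ab: error(*ab),
)
print(best_a, best_b)          # the 2 "parameters" of our tiny model
print(best_a * 1400 + best_b)  # predict km/L for a new 1400 kg car
```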

<!--
Is ML 🪄 magic? Not at all... the basic idea is really quite simple, actually! To illustrate:
-->

---
layout: image
backgroundSize: contain
---
<!--
We literally just make a computer program try out values for a and b, to try to make the model have "good accuracy" - in this case, that just means "making the red line as close to those green points as it can be".
E.g. a large language model really is (kind of) similar to this - except that instead of having 2 parameters, for a and b, it has more - many more...
Copyright © 2020-2021 Gajanan Bhat. All rights reserved.
PS: This is technically, mathematically, not entirely accurate (because it's not r...
-->

---

# 🧠 Your Brain is a Biological Neural Network

<div style="background:white; display: flex; justify-content: center; align-items: center;">

![Neuron](/images/main-qimg-bc7fe92df7c9be5e697169f127a2dd8e.webp)

</div>
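An _artificial_ neuron crudely imitates this: just a weighted sum of its inputs plus an "activation". A toy sketch, with invented weights:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the input signals, squashed to 0..1 by a sigmoid.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Two made-up input signals and made-up weights: does the neuron "fire"?
print(neuron([0.5, 0.9], weights=[1.2, -0.7], bias=0.1))  # ~0.52
```

Stack many of these in layers, as on the next slide's playground, and you have a neural network.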

<!--
If, in addition to remembering Linear Regression from your High School Math class, you also remember a little bit of your Biology 101, then perhaps this image is familiar to you? It's a neuron!
-->

---
layout: image
image: /videos/playground.tensorflow.org.webp
backgroundSize: contain
---

&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Tensorflow Playground](https://playground.tensorflow.org)

---

# GenAI ML Models

GenAI ML Models really are just such parameters...
E.g. [Google's open source Gemma](https://ai.google.dev/gemma) (v2) has **27 billion** parameters!

(And [Google's Gemini Models](https://deepmind.google/technologies/gemini/) are even bigger.)

<br/>

💬 Words to 1️⃣2️⃣3️⃣ numbers & 🧮 _prediction!_

Or 🗣️ voice. Or 🖼️ images. Or 🎥 videos.

<br/>

Often _Pipelines_ of N models.
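A toy illustration of "words to numbers & prediction"; the vocabulary and probabilities below are invented, where a real model learns them from data:

```python
# Words become numbers (token ids)...
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
ids = [vocab[w] for w in "the cat sat on the".split()]
print(ids)  # [0, 1, 2, 3, 0]

# ...and the model turns ids into a probability for each possible next
# token. Here the distribution is simply hard-coded, for illustration:
next_token_probs = {"mat": 0.7, "cat": 0.2, "sat": 0.1}
print(max(next_token_probs, key=next_token_probs.get))  # mat
```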

<!--
Instead of just 2 or 3 such parameters, as seen previously.
In reality often not just 1 model, but Pipelines with Workflows connecting several models; e.g. LangChain's LangGraph in FLOSS.
-->

---
<!--
Pictures of digits, cats & dogs, or texts...
No ☁️ Cloud (DC) is more 🔐 #private & #fast.
-->

---
layout: center
---

# Data?

<br>

You need a **massive** amount of text & images & video to train Large Models with billions of parameters...

...and there are some interesting open questions around this: ©️ ® ™ 👩‍⚖️ ❓

---
layout: center
---

# Energy?

Specialized Hardware (GPU, TPU, NPU) - for parallel 🤓 matrix computation (similar to 🎮 gaming graphics).

Both training (more) and also inference (less) use 🔋 energy... 🏢 vs. 🧠?
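What that "matrix computation" boils down to, as a naive sketch; every output cell is independent, which is exactly what GPUs/TPUs exploit in parallel:

```python
# Naive 2x2 matrix multiplication - the core operation behind both
# training and inference; each output cell could be computed in parallel.
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print(C)  # [[19.0, 22.0], [43.0, 50.0]]
```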

<br/>

Who pays? You, eventually, e.g. with:

- 🎥 Ads

- 🪟 Licenses

- 🤑 Pricey 🍎 HW 📱

- 💸 Subscriptions

---
layout: image
image: /images/TPU_v5L_Pod_-_Front_View_-_Web.max-2600x2600.jpg
backgroundSize: contain
---

---
layout: center
---

# Cloud? Open Source?

ChatGPT & Google Gemini etc. run in the ☁️ Cloud.

You can download e.g. Google's ♊ Gemma, Meta's
<br/>🦙 Llama, 🇫🇷 Mistral, et al., to run them yourself.

Do try e.g. [Ollama](https://ollama.com) to run an [LLM @ Home 🏡](/~https://github.com/vorburger/vorburger.ch-Notes/blob/develop/ml/ollama1.md)
<br/>(e.g. on an AMD RX 7600 = $400, or an NVIDIA H100 = $30k!).
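Once Ollama runs locally you can also script it; a sketch assuming its default REST endpoint on port 11434 and an already-pulled model (here `gemma2`, as an example):

```python
import json
import urllib.request

# Ask the local Ollama server for a (non-streaming) completion.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "gemma2",  # any model pulled e.g. via `ollama run gemma2`
        "prompt": "Why is the sky blue?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```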

<br/>

LLM training is still proprietary - it's hard to DIY.

(But you can train smaller models, or _fine-tune_.)

---
layout: image
image: /images/h100.webp
backgroundSize: contain
---

---

Large Models might know about "the world", not (yet) "your world"... but:

- _Fine Tuning_ is another ML technique to efficiently adapt a previously pre-trained model with new data (see the sketch below).

- Coming to 🤑 device gadgets near you: 🪟 Copilot+ 💻 PCs, 📱 Google Pixel 8, 🍎 Apple Intelligence, etc.

- Try e.g. Google Gemini Workspace Extensions...
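To connect _Fine Tuning_ back to the toy `y = a*x + b` model from earlier: keep the already-trained parameters as the starting point and nudge them on new data only. A sketch, with invented numbers:

```python
# "Fine-tune" the toy model: start from its pre-trained parameters,
# then take many small gradient steps on NEW data only.
a, b = -0.011, 30.7                        # parameters "trained" earlier
new_points = [(1000, 17.5), (1800, 10.5)]  # new data, e.g. "your" cars

lr = 1e-8  # tiny learning rate, since the x values (kg) are large
for _ in range(10_000):
    for x, y in new_points:
        err = (a * x + b) - y  # prediction minus truth
        a -= lr * err * x      # gradient of squared error w.r.t. a
        b -= lr * err          # gradient of squared error w.r.t. b
print(a, b)  # slightly adjusted parameters, fitting the new data better
```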

---
layout: image
image: /images/notebooklm.google.com.png
backgroundSize: contain
---

---
layout: two-cols
---

# E.g. Gemini Extensions

<br/>

![Google Gemini Extensions](/images/google-gemini-extensions.png)

::right::

![Google Gemini Gmail Email Access](/images/google-gemini-conference-email.png)

---
layout: image
image: /images/star-trek-scotty.gif
---

<!--
Scotty tries to talk into the mouse of a computer in Star Trek... this used to be a joke, but... we're there now.
-->

---

# The Future?
---
layout: image
image: /images/KITT_Speaks_Spanish_Knight_Rider-ezgif.com-video-to-gif-converter
---

---

# Applications

<br/>

💬 Chat Bots, [Image](https://huggingface.co/spaces/nyxai-lab/perspectives-on-ai) & [Movie Generation](https://www.youtube.com/@aicinemaof)

💻 Coding: E.g. GitHub Copilot (and others)

🏢 Productivity: E.g. documents, email summary; or Meeting Minutes transcription

⚕️ **Health:** E.g. [Google's Health AI](https://ai.google/discover/healthai/) for breast cancer 🩻 [screening](https://www.youtube.com/watch?v=CzgMUVPduZA) or expanding access to ultrasound

⚛️ **Science:** E.g. Google DeepMind's 🧬 [AlphaFold](https://www.youtube.com/watch?v=gg7WjuFs8F4) (open [access](https://alphafoldserver.com/)), or the 🧮 [Math](https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/) 🏅 Olympiad

🎒 **Education:** E.g. [Khan Academy's Khanmigo](https://www.youtube.com/watch?v=hJP5GqnTrNo&t=4s), or Homework with [Google Lens](https://lens.google)

🧟 Deepfakes & 🙊 misinformation spam - **and** their 🕵️‍♀️ detection

<br/>

Is (some of) this _"creative"?_ Does 🧞 it _"think"?_ What **really** is creativity and thinking... 🤔

<!--
AlphaFold "protein folding" breakthrough unlocking research of new medicines
https://www.theguardian.com/technology/article/2024/jul/25/google-deepmind-takes
Khan also e.g. https://www.youtube.com/watch?v=_EfEoSP7oYQ (after the aforementioned TED Talk)
TODO Try Google Lens with Homework & screenshot it
https://c2pa.org for GenAI?
-->

---
Machine Learning is a lot of fun! Get started with exploring it today:

1. [`gemini.google.com`](https://gemini.google.com) to learn _"Prompt Engineering"_
1. [Google AI Explorables](https://pair.withgoogle.com/explorables/)
1. [Tensorflow Playground](https://playground.tensorflow.org)

For developers:
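For instance, a sketch assuming the `google-generativeai` Python SDK and an API key from Google AI Studio (this snippet is an illustration, not part of the original list):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # get one at https://aistudio.google.com
model = genai.GenerativeModel("gemini-1.5-flash")
print(model.generate_content("Can you make this sound cooler?").text)
```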

