new posts!

This commit is contained in:
Tom 2025-01-18 22:42:22 +00:00
parent 1f5000100c
commit 308ad0cd5e
24 changed files with 10382 additions and 110 deletions

View File

@ -0,0 +1,129 @@
---
title: Replacing an image colour with transparency
layout: post
excerpt: What happens if you convert an RGB image to RGBA by pretending it was sitting on a white background?
draft: true
images: /assets/blog/alpha_test
thumbnail: /assets/blog/alpha_test/thumbnail.png
social_image: /assets/blog/alpha_test/thumbnail.png
# The alt text for both images.
alt: An image of Mixtela's latest project, a pendant with a fluid simulation running on an LED matrix.
image_class: no-dim
mathjax: true
---
I was looking at [Mixtela's latest project][mixtelas_project] and admiring how nicely his images blend with the background of the page. He has a simple white background and his images all have perfect white backgrounds with just a little hint of a shadow.
<figure>
<img src="{{page.images}}/original.jpg" class = "no-dim">
<figcaption markdown=1> An image of [Mixtela's fluid simulation pendant][mixtelas_project].
</figcaption>
</figure>
I think he achieves this simply through very good photography: he probably shoots the object under good lighting in a white booth of some kind. I suspect he also adjusts the white balance in post, because the white background pixels are all exactly `(255,255,255)`.
But my site has a slightly off white background and it also has a dark mode. Is there some way I could make a similar image that adapts to the background colour?
Well I can kinda think of a crude way. What if we tried to invert the alpha blending process to derive an RGBA image from an RGB image and a background colour?
For a particular pixel of the image, the output pixel $c_{out}$ is just the linear combination of the background $b$ and foreground $f$ colours weighted by the alpha channel $\alpha$:
$$ c_{\text{out}} = f \alpha + b (1 - \alpha) $$
I'm gonna fix the output colour $c_{\text{out}}$ to be the RGB colour of my source image and the background $b$ to be white. This gives us:
$$ f = \left( c_{\text{out}} - b (1 - \alpha) \right) / \alpha $$
Now we have to choose an alpha for every pixel. Note that it's not an entirely free choice: any pixel that isn't white in the source image has a minimum alpha, below which we would start getting negative values in the solution.
For a white background that minimum alpha turns out to be just $1$ minus the minimum of the r, g and b channels. For a different choice of background colour it would be $1$ minus the minimum of the three channels of $c_{\text{out}} / b$.
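As a quick sanity check with a made-up pixel value: at the smallest valid alpha, every channel of the recovered foreground $f$ stays in range, and the minimum channel lands exactly on zero.

```python
import numpy as np

c_out = np.array([0.8, 0.6, 0.7])  # a hypothetical light greyish pixel
b = np.array([1.0, 1.0, 1.0])      # white background

# the smallest alpha that keeps the foreground colour non-negative
alpha = 1 - np.min(c_out / b)
f = (c_out - b * (1 - alpha)) / alpha

print(alpha)  # 0.4
print(f)      # approximately [0.5, 0.0, 0.25]
```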
Logically some parts of this image should not be transparent, the actual pendant itself is clearly made out of metal so you wouldn't be able to see through it. The shadow on the other hand would make sense as a grey colour with some transparency.
However, I'm just going to see what I get if I set alpha to the minimum possible value for each pixel, i.e. make every pixel as transparent as it can be.
```python
import numpy as np
from PIL import Image
from pathlib import Path

input_path = Path("pendant-complete1.jpg").expanduser()

# convert to 64-bit floats in the range 0 - 1
color = np.asarray(Image.open(input_path).convert("RGB")).astype(np.float64) / 255.0

# the background colour we want to turn into transparency
white = np.array([1.0, 1.0, 1.0])

# the most transparent alpha each pixel allows
alpha = 1 - np.min(color, axis=2)

premultiplied_new_color = color - (1 - alpha)[:, :, None] * white[None, None, :]

# This does new_color = premultiplied_new_color / alpha
# but outputs 0 when alpha = 0
new_color = np.divide(
    premultiplied_new_color,
    alpha[:, :, None],
    out=np.zeros_like(premultiplied_new_color),
    where=alpha[:, :, None] != 0,
)

new_RGBA = np.concatenate([new_color, alpha[:, :, None]], axis=2)
img = Image.fromarray((new_RGBA * 255).astype(np.uint8), mode="RGBA")
img.save("test.png")
```
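One way to convince yourself the inversion is right is to composite the result back over white and check you recover the source. A small self-contained check on random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
color = rng.random((8, 8, 3))  # stand-in for the source image
white = np.ones(3)

alpha = 1 - np.min(color, axis=2)
pre = color - (1 - alpha)[:, :, None] * white[None, None, :]
new_color = np.divide(
    pre,
    alpha[:, :, None],
    out=np.zeros_like(pre),
    where=alpha[:, :, None] != 0,
)

# alpha-composite back over white: this should reproduce the source
recomposed = new_color * alpha[:, :, None] + white * (1 - alpha)[:, :, None]
assert np.allclose(recomposed, color)
```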
And here are the results; switch the page to dark mode to see more of the effect. With a light, slightly off-white background the transparent image looks very similar to the original, but now it blends nicely into the background.
Hit this button to switch to night mode:
<button class="toggle-button js-mode-toggle" aria-label="Night Mode Toggle">
<span class="toggle-button__icon" aria-hidden="true"></span>
</button>
<figure class="multiple">
<img src="{{page.images}}/original.jpg" class = "no-dim">
<img src="{{page.images}}/white_subtracted.png" class = "no-dim">
<img src="{{page.images}}/white_subtracted.png" class = "brighten">
<img src="{{page.images}}/ai_subtracted.png">
<figcaption> Here are some images, (top left) original, (top right) white subtracted and replaced with alpha, (bottom left) same but brightened in dark mode, (bottom right) cutout based background removal tool (loses shadow)</figcaption>
</figure>
I quite like the effect, and because we chose to make all the pixels as transparent as possible, it has the added bonus that the image dims a bit in dark mode.
## Addendum
Harking back to my other post about Einstein summation notation, if we have an image with an index for height $$h$$, width $$w$$ and a colour channel $$c$$ that runs over `r,g,b`, we can write these equations as:
$$ c^{\text{out}}_{hwc} = f_{hwc} \alpha_{hw} + b_{c} (1 - \alpha_{hw}) \quad \text{(no sum over } h, w \text{)} $$
so instead of
```python
premultiplied_new_color = color - \
(1 - alpha)[:, :, None] * white[None, None, :]
```
we could also write:
```python
premultiplied_new_color = color - np.einsum(
"xy, i -> xyi", (1 - alpha), white
)
```
...which is probably not that much simpler for this use case, but it becomes more helpful when you're not just doing elementwise operations and broadcasting.
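For instance (a made-up example, nothing to do with this post's pipeline): applying a 3×3 colour-mixing matrix to every pixel is a genuine contraction over the channel index, and the einsum spells that out much more plainly than the broadcasting incantation.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 5, 3))           # indices h, w, c
M = np.array([[0.5, 0.25, 0.25],        # an arbitrary colour-mixing matrix
              [0.0, 1.0, 0.0],
              [0.25, 0.25, 0.5]])

# out_{hwc} = sum_k M_{ck} image_{hwk}
out = np.einsum("ck, hwk -> hwc", M, image)

# the equivalent broadcasting version is harder to read
out_b = (M[None, None] @ image[..., None])[..., 0]
assert np.allclose(out, out_b)
```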
[mixtelas_project]: https://mitxela.com/projects/fluid-pendant

View File

@ -0,0 +1,180 @@
---
title: Unexpected Depths
layout: post
excerpt: Did you know iPhone portrait mode HEIC files have a depth map in them?
draft: true
assets: /assets/blog/heic_depth_map
thumbnail: /assets/blog/heic_depth_map/thumbnail.png
social_image: /assets/blog/heic_depth_map/thumbnail.png
alt: A depth map extracted from an iPhone portrait-mode photo.
head: |
<script async src="/node_modules/es-module-shims/dist/es-module-shims.js"></script>
<script type="importmap">
{
"imports": {
"three": "/node_modules/three/build/three.module.min.js",
"three/addons/": "/node_modules/three/examples/jsm/",
"lil-gui": "/node_modules/lil-gui/dist/lil-gui.esm.min.js"
}
}
</script>
<script src="/assets/js/projects.js" type="module"></script>
---
You know how iPhones do this fake depth of field effect where they blur the background? Did you know that the depth information used to do that effect is stored in the file?
```python
# pip install pillow pillow-heif numpy
from pathlib import Path

import numpy as np
from PIL import Image
from pillow_heif import HeifImagePlugin  # registers HEIF support with PIL

d = Path("wherever")
img = Image.open(d / "test_image.heic")

depth_im = img.info["depth_images"][0]
pil_depth_im = depth_im.to_pillow()
pil_depth_im.save(d / "depth.png")

depth_array = np.asarray(depth_im)
# shrink the RGB image to match the (much smaller) depth map
rgb_rescaled = img.resize(depth_array.shape[::-1])
rgb_rescaled.save(d / "rgb.png")
```
<figure class="two-wide">
<img src="{{page.assets}}/rgb.png">
<img src="{{page.assets}}/depth.png">
<figcaption> A lovely picture of my face and a depth map of it. </figcaption>
</figure>
Crazy! I had a play with projecting this into 3D to see what it would look like. I was too lazy to look deeply into how this should be interpreted geometrically, so initially I just pretended the image was taken from infinitely far away and then eyeballed the units. The fact that this looks at all reasonable makes me wonder if the depths are somehow reprojected to match that assumption. Otherwise you'd need to also know the properties of the lens that was used to take the photo.
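For reference, the geometrically "proper" version would be a pinhole-camera unprojection, which needs the focal length. Here's a sketch with a made-up focal length `f_px` (in pixels), not anything read out of the HEIC metadata:

```python
import numpy as np

def unproject_depth(depth, f_px):
    """Back-project a (h, w) metric depth map through a pinhole camera."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # rays through each pixel, centred on the principal point
    x = (u - w / 2) * depth / f_px
    y = (v - h / 2) * depth / f_px
    return np.stack([x, y, depth], axis=-1)

# a constant depth map unprojects to a flat plane at z = 2
points = unproject_depth(np.full((4, 6), 2.0), f_px=500.0)
print(points.shape)  # (4, 6, 3)
```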
The handy `pypcd4` Python library made outputting the data quite easy, and three.js has a module for displaying point cloud data. You can see why, when writing numpy code, I tend to scatter `print(f"{array.shape = }, {array.dtype = }")` liberally throughout; it just makes keeping track of those arrays so much easier.
```python
from pypcd4 import PointCloud

np_im = depth_array  # the depth map from the snippet above
n, m = np_im.shape
aspect = n / m
x = np.linspace(0, 2 * aspect, n)
y = np.linspace(0, 2, m)

rgb_points = np.array(rgb_rescaled).reshape(-1, 3)
print(f"{rgb_points.shape = }, {rgb_points.dtype = }")
rgb_packed = PointCloud.encode_rgb(rgb_points).reshape(-1, 1)
print(f"{rgb_packed.shape = }, {rgb_packed.dtype = }")
print(np.min(np_im), np.max(np_im))

mesh = np.array(np.meshgrid(x, y, indexing='ij'))
xy_points = mesh.reshape(2, -1).T
print(f"{xy_points.shape = }")

# rescale the 8-bit depth values back into their original range
meta = pil_depth_im.info["metadata"]
z = np_im.reshape(-1, 1).astype(np.float64) / 255.0
z = (meta["d_max"] - meta["d_min"]) * z + meta["d_min"]

xyz_rgb_points = np.concatenate([xy_points, z, rgb_packed], axis=-1)
print(f"{xyz_rgb_points.shape = }")
pc = PointCloud.from_xyzrgb_points(xyz_rgb_points)
pc.save(d / "pointcloud.pcd")
```
Click and drag to spin me around. It didn't really capture my nose very well, I guess this is more a foreground/background kinda thing.
<canvas style ="width: 100%;" id="canvas-id-1"></canvas>
<script type="module">
import * as THREE from "three";
import { OrbitControls } from "three/addons/controls/OrbitControls.js";
import { DragControls } from "three/addons/controls/DragControls.js";
import { PCDLoader } from 'three/addons/loaders/PCDLoader.js';
import { GUI } from 'three/addons/libs/lil-gui.module.min.js';
let canvas, scene, camera, renderer, gui, orbitControls;
const d = 1;
init();
function init() {
canvas = document.getElementById('canvas-id-1');
const loader = new PCDLoader();
scene = new THREE.Scene();
loader.load( '{{page.assets}}/pointcloud.pcd', function ( points ) {
points.geometry.center();
// points.geometry.rotateZ( -Math.PI );
// points.geometry.rotateY( Math.PI/2 );
points.geometry.rotateZ( -Math.PI/2 );
// points.geometry.rotateY( Math.PI/2 );
points.name = 'depth_map';
scene.add( points );
scene.add( new THREE.AxesHelper( 1 ) );
points.material.color = new THREE.Color(0x999999);
points.material.size = 0.001
render();
} );
// --- Scene ---
const aspect = canvas.clientWidth / canvas.clientHeight;
camera = new THREE.PerspectiveCamera( 30, aspect, 0.01, 40 );
camera.position.set( 0, 0, 5);
camera.lookAt(0, 0, 0);
// --- Renderer (use the existing canvas) ---
renderer = new THREE.WebGLRenderer({ alpha: true, canvas: canvas, antialias: true });
renderer.setSize(canvas.clientWidth, canvas.clientHeight,);
// --- OrbitControls ---
orbitControls = new OrbitControls(camera, renderer.domElement);
orbitControls.addEventListener( 'change', render ); // use if there is no animation loop
// controls.minDistance = 0.5;
// controls.maxDistance = 10;
// orbitControls.enableRotate = false;
// orbitControls.enablePan = false;
// orbitControls.enableDamping = true;
// orbitControls.dampingFactor = 0.05;
// --- Lights ---
const ambientLight = new THREE.AmbientLight(0xffffff, 0.7);
scene.add(ambientLight);
const dirLight = new THREE.DirectionalLight(0xffffff, 0.7);
dirLight.position.set(5, 5, 10);
scene.add(dirLight);
window.addEventListener('resize', onWindowResize, false);
}
function onWindowResize() {
    camera.aspect = canvas.clientWidth / canvas.clientHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(canvas.clientWidth, canvas.clientHeight);
    render();
}
function render() {
renderer.render(scene, camera);
}
</script>

View File

@ -105,7 +105,11 @@ A table:
## Line Element
## Math
Stack overflow has a nice [mathjax summary](https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference)
List of mathjax symbols [here](https://docs.mathjax.org/en/latest/input/tex/macros/index.html)
So the setup is this: Imagine we draw a very short line vector $\vec{v}$ and let it flow along in a fluid with velocity field $u(\vec{x}, t)$.
@ -166,6 +170,18 @@ _{T_{1}}
_{T_{2}}.
$$
Aligning equations:
$$
\begin{align}
\sqrt{37} & = \sqrt{\frac{73^2-1}{12^2}} \\
& = \sqrt{\frac{73^2}{12^2}\cdot\frac{73^2-1}{73^2}} \\
& = \sqrt{\frac{73^2}{12^2}}\sqrt{\frac{73^2-1}{73^2}} \\
& = \frac{73}{12}\sqrt{1 - \frac{1}{73^2}} \\
& \approx \frac{73}{12}\left(1 - \frac{1}{2\cdot73^2}\right)
\end{align}
$$
References:
[This is a link to the subtitle heading at the top of the page](#subtitle)
@ -365,9 +381,10 @@ function animate() {
</script>
<figure class="multiple">
<img src="/assets/blog/alpha_test/original.jpg" class = "no-dim">
<img src="/assets/blog/alpha_test/white_subtracted.png" class = "no-dim">
<img src="/assets/blog/alpha_test/white_subtracted.png" class = "no-dim" style="filter: brightness(2);">
<img src="/assets/blog/alpha_test/ai_subtracted.png">
<figcaption> Here are some images, (top left) original, (top right) white subtracted and replaced with alpha, (bottom left) same but brightened, (bottom right) ai background removal tool (loses shadow) </figcaption>
</figure>

View File

@ -36,6 +36,8 @@
--theme-highlight-color-transparent: hsla(338, 75%, 60%, 33%);
--theme-subtle-text-color: #606984;
--night-mode-fade-time: 0.5s;
// constrain width and center
--body-max-width: 900px;
--body-width: min(100vw, 900px);
@ -233,7 +235,7 @@ figure.two-wide {
justify-content: center;
gap: 1em;
margin-bottom: 1em;
> *:not(figcaption) {
width: calc(50% - 0.5em);
}
}
@ -244,7 +246,7 @@ figure.multiple {
justify-content: center;
gap: 1em;
margin-bottom: 1em;
> *:not(figcaption) {
width: calc(50% - 0.5em);
margin: 0;
padding: 0;
@ -260,7 +262,7 @@ figure.multiple {
margin-bottom: 1em;
place-items: center center;
> *:not(figcaption) {
margin: 0;
padding: 0;
width: 100%;
@ -362,15 +364,16 @@ body:not(.has-wc) .has-wc {
}
// Add transitions for things that will be affected by night mode
body,
a {
transition: background var(--night-mode-fade-time) ease-in-out,
color var(--night-mode-fade-time) ease-in-out;
}
img,
svg {
transition: opacity var(--night-mode-fade-time) ease-in-out,
filter var(--night-mode-fade-time) ease-in-out;
}
@mixin night-mode {
@ -391,10 +394,15 @@ img.invertable {
// Two main image classes are "invertable" i.e look good inverted
// and "no-dim" i.e don't get dimmed in night mode
// All other images get dimmed in night mode
img:not(.invertable):not(.no-dim):not(.brighten) {
opacity: 0.75;
}
svg.brighten,
img.brighten {
filter: brightness(2);
}
svg.invertable,
img.invertable {
opacity: 1;

View File

@ -43,7 +43,8 @@ summary.cv:before {
left: 1rem;
transform: rotate(0);
transform-origin: 0.2rem 50%;
transition: 0.25s transform ease,
border-color var(--night-mode-fade-time) ease-in-out;
}
summary li {
@ -90,6 +91,8 @@ div.details-container {
margin-top: 1em;
border-bottom: var(--theme-subtle-outline) 1px solid;
transition: border-color var(--night-mode-fade-time) ease-in-out,
opacity var(--night-mode-fade-time) ease-in-out;
h2 {
margin: 0px;
}

View File

@ -34,6 +34,7 @@ header {
border-radius: 50%;
padding: 5px;
border: 1px solid var(--theme-text-color);
transition: border-color var(--night-mode-fade-time) ease-in-out;
}
h1 {

View File

@ -3,6 +3,7 @@
}
.user-toggle {
display: inline;
padding-top: 0.5rem;
}
@ -17,7 +18,8 @@
color: var(--theme-text-color);
background: var(--theme-background-color);
border: 1.5px solid var(--theme-text-color);
transition: background var(--night-mode-fade-time) ease-in-out,
color var(--night-mode-fade-time) ease;
}
.toggle-button__icon {
@ -27,6 +29,6 @@
flex-shrink: 0;
margin: 0;
transform: translateY(0px); /* Optical adjustment */
transition: filter var(--night-mode-fade-time) ease-in-out;
filter: var(--button-icon-filter);
}

View File

@ -1,146 +1,148 @@
h1.thesis-title {
  font-size: 3em !important;
}

main h1,
h2,
h3 {
  font-family: "Source Serif Pro", serif;
  font-weight: 300;
  font-size: 2.2em !important;
}

// Make figures looks nice
figure {
  display: flex;
  flex-direction: column;
  align-items: center;
  margin-inline-start: 0em;
  margin-inline-end: 0em;
  max-width: 900px !important;
  // border-bottom: solid #222 1px;
  padding-bottom: 1em;
  // border-top: solid #222 1px;
  // padding-top: 1em;
}

figure > img,
figure > svg {
  // max-width: 90% !important;
  margin-bottom: 2em;
}

figcaption {
  // font-style: italic;
  // font-size: 0.9em;
  max-width: 90%;
}

nav.page-table-of-contents > ul > li:first-child {
  display: none;
}

//For the animation that plays in the nav as you scroll
nav.page-table-of-contents {
  li li {
    font-size: 0.9em;
  }
  ul {
    padding-inline-start: 6px;
  }
  a {
    transition: all var(--night-mode-fade-time) ease-in-out;
    color: #000;
    font-weight: normal;
  }
  li.active > a {
    color: #000 !important;
    font-weight: bold;
  }
}

// modify the spacing of the various levels
li {
  margin-bottom: 0.2em;
}
main > ul > li {
  margin-top: 1em;
}
main > ul > ul > li {
  margin-top: 0.5em;
}

// Pull the citations a little closer in to the previous word
span.citation {
  margin-left: -1em;
  a {
    text-decoration: none;
    color: darkblue;
  }
}

// Mess with the formatting of the bibliography
div.csl-entry {
  margin-bottom: 0.5em;
}
div.csl-entry a {
  text-decoration: none;
  color: darkblue;
}
div.csl-entry div {
  display: inline;
}

header li {
  list-style: none;
  a {
    text-decoration: none;
    margin-bottom: 0.5em;
    display: block;
  }
}

nav.overall-table-of-contents > ul {
  padding-inline-start: 0px;
  > li {
    list-style: none;
    margin-top: 1em;
  }
}

// Page header
div#page-header {
  //make the header sticky, I don't really like how this looks but it's fun to play with
  // position: sticky;
  // top: 0px;
  // background: white;
  // z-index: 10;
  // width: 100%;
  p {
    margin-block-end: 0px;
  }
}

@media only screen and (max-width: $horizontal_breakpoint),
  only screen and (max-height: $vertical_breakpoint) {
  //make the figures go to 100% and use italics to denote the figure captions
  figure > img,
  figure > svg {
    max-width: 100% !important;
  }
  figcaption {
    font-style: italic;
    width: 100%;
  }
}

View File

@ -0,0 +1,38 @@
#!/usr/bin/env python3
import sys
import numpy as np
from PIL import Image
if len(sys.argv) < 3:
print("Usage: python white_to_alpha.py <input_image_path> <output_image_path>")
sys.exit(1)
input_path, output_path = sys.argv[1], sys.argv[2]
# convert to 64bit floats from 0 - 1
d = np.asarray(Image.open(input_path).convert("RGBA")).astype(np.float64) / 255.0
#decompose channels
# r,g,b,a = d.T
color = d[:, :, :3]
# The amount of white in each pixel
white = np.array([1.,1.,1.])
white_amount = np.min(color, axis = 2)
alpha = 1 - white_amount
premultiplied_new_color = (color - (1 - alpha)[:, :, None] * white[None, None, :])
new_color = premultiplied_new_color / alpha[:, :, None]
original_color = alpha[:,:,None] * new_color + (1 - alpha[:,:,None]) * white
new_RGBA = np.concatenate([new_color, alpha[:,:,None]], axis = 2)
# Premultiplied alpha, but PIL doesn't seem to support it
# new_RGBa = np.concatenate([premultiplied_new_color, alpha[:,:,None]], axis = 2)
# print(np.info(new_RGBA))
img = Image.fromarray((new_RGBA * 255).astype(np.uint8), mode = "RGBA")
img.save(output_path)
print(f"Image saved to {output_path}")


View File

@ -19,7 +19,7 @@ if (window.customElements) {
document.querySelector("body").classList.add("has-wc");
}
const modeToggleButtons = document.querySelectorAll(".js-mode-toggle");
const modeStatusElement = document.querySelector(".js-mode-status");
const toggleSetting = () => {
@ -42,9 +42,10 @@ const toggleSetting = () => {
localStorage.setItem(STORAGE_KEY, currentSetting);
};
modeToggleButtons.forEach((m) => {
  m.addEventListener("click", (evt) => {
    evt.preventDefault();
    toggleSetting();
    applySetting();
  });
});
