
#compression


I have to say that iZotope's Nectar remains incredible at cleaning up voices. I had two actors whose recordings carried a very small amount of amplified noise. Small, but for a guy like me, utterly unacceptable. Nectar's EQ and compression did a better job of sweetening the voices and killing that minor "electric" gain than any other compression software I have.

🗜️ #compression #7zip
To illustrate the "maximum compression" settings I talked about here: sebsauvage.net/links/?s0zmfA

An illustration with a game: Loophole.

Uncompressed: 3.18 GB
7z "ultra" (-mx=9): 1.33 GB
7z with my settings: 0.53 GB
(And zpaq -m4 does slightly better: 0.49 GB)

Of course this is an example that works particularly well; the gain won't necessarily be as good on other data.
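
The exact settings are behind the link above; as a purely illustrative example (not necessarily the linked values), the usual way to push 7z past "ultra" is to raise the LZMA2 dictionary size and fast-byte count and keep the archive solid:

7z a -t7z -m0=lzma2 -mx=9 -md=512m -mfb=273 -ms=on game.7z ./game/

Here -md sets the dictionary size, -mfb the number of fast bytes (273 is the maximum), and -ms=on enables solid mode so redundancy across files is exploited.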


So I have hundreds of ~1-minute videos recorded on my phone ~10 years ago, and they generally don't have that great compression, nor are they stored in a modern, efficient video format.

For archiving purposes, I want to take advantage of my workstation's mighty GPU to process them so that the quality stays approximately the same but the file size is strongly reduced.

Nevertheless, compressing video is terribly hard, and far more complex than compressing pictures, so I wouldn't really know how to do this: what format to use, what codec, what bitrate, what parameters to keep an eye on, etc.

I don’t care if the compression takes a lot of time, I just want smaller but good looking videos.

Any tips? (Links to guides and tutorials are ok too)

Also, unfortunately I am forced to use Windows for this (don’t ask me why 🫠), but I know nothing about Windows because I hate it. Practical software suggestions are very much welcome, too!
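
For what it's worth, one common starting point, assuming an NVIDIA GPU and an ffmpeg build with NVENC enabled (the -cq quality target is something to tune by eye, and the filenames are placeholders):

ffmpeg -i input.mp4 -c:v hevc_nvenc -preset p7 -rc vbr -cq 28 -c:a copy output.mp4

That said, since encoding time doesn't matter here, a software encode (-c:v libx265 -crf 24 -preset slow) will typically produce smaller files than NVENC at the same visual quality; the GPU mainly buys speed.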

#ffmpeg #help #askFedi

Fascinating.

tmp $ wc -c < somefile.xopp 
735772
tmp $ file somefile.xopp 
somefile.xopp: gzip compressed data, from Unix, original size modulo 2^32 2086031
tmp $ gunzip < somefile.xopp |file -
/dev/stdin: XML 1.0 document, ASCII text, with very long lines (12483)
tmp $ gunzip < somefile.xopp |wc
    937  204466 2086031
tmp $ gunzip < somefile.xopp |bzip2 -9 |wc -c
619543
tmp $ gunzip < somefile.xopp |bzip3 |wc -c
575115
tmp $ gunzip < somefile.xopp |xz -9e |wc -c
519764
tmp $ gunzip < somefile.xopp |grep -m1 "^.stroke" |cut -c 1-160
<stroke tool="pen" color="#3333ccff" width="2.26 0.72691752 0.73026261 0.73809079 0.74588449 0.74364294 0.72915908 0.71467521 0.71133013 0.70908858 0.7057435 0.
tmp $ gunzip < somefile.xopp |grep -oE "\<[0-9]+\.[0-9]+\>" |wc -l
201692
tmp $ echo "735772/201692" |bc -l
3.64799793744917993772
tmp $ echo "519764/201692" |bc -l
2.57701842413184459472
tmp $ echo "2086031/201692" |bc -l
10.34265612914741288697
tmp $ 
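
In other words: the file encodes about 200k decimal coordinates, so the uncompressed XML spends roughly 10.3 bytes per number, gzip gets that down to about 3.6, and xz -9e to about 2.6. For comparison, a raw 32-bit float per coordinate would take 201692 × 4 = 806768 bytes, slightly more than the whole gzip'd file.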

#Compression #XML #Xournal #Xournalpp #Xournal++

So @rl_dane introduced me to #bzip3 as a replacement for #bzip2. Let's turn some bz2 files into bz3 to see the difference.
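
The recompression itself is just a pipe, since bzip3, like bzip2, works as a stdin/stdout filter (the filename is a placeholder, default block sizes assumed):

bzip2 -dc file.tar.bz2 | bzip3 > file.tar.bz3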

First example: 90k opus files

The "hey snips" wake-word dataset: ~90k opus files in a 3.1GB tar. bzip2 produces the same 3.1GB, which is expected since opus audio is already compressed. bzip3 gets it down to 3.0GB but uses tons of computation power. Not worth the 100MB.

Second example: Windows 7 virtual box VM image

Windows7.vdi is a Windows 7 VirtualBox image kept around for the "special" days. I think I have to get rid of it, but while it's still there, let's see how each performs. It is 16GB uncompressed. bzip2 -9 gets it to 7.0GB; bzip3 reaches 6.3GB, but at the expense of roughly 3x the CPU time. Deleting all of them anyway. Down with Windows.

Third example: Pure XML text file

A pure XML text file, containing Persian and English characters. Uncompressed it is 1.7GB; bzip2 -9 gets 276MB, while bzip3 gets 260MB.

Final example: Creating a simple compression bomb

So I did this:

dd if=/dev/zero of=./justzero bs=2G count=6

So now I have a 12GB file containing only zero bytes. bzip2 -9 gets it to 672KB; bzip3 to 46KB.

Conclusion

Thank you @rl_dane

Real nice thing!

So I was short on storage on my archive drive. I had the librewolf source code as a ~800MB tar.gz. I decompressed it and recompressed it with bzip2 -9, and now it's ~600MB. Generally #bzip2 compresses data like this better than #gzip.

Edit: But don't use bzip2 -9 for everything. Sometimes -4 compresses just as well as -9, with the latter being tons slower. Also, there is pbzip2 for using all your CPU cores.
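
Put together, recompressing a tarball on all cores is a one-liner; pbzip2 mirrors bzip2's flags (filenames are placeholders):

gunzip -c librewolf.tar.gz | pbzip2 -9 -c > librewolf.tar.bz2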

Many epub textbooks are quite heavy, which makes it impossible to send them to a Kindle by email, since they exceed 50MB.

I found a bash script that gives fast and quite acceptable compression. In my case it worked perfectly with quality and resizing at 60%, as shown in the image. It took an epub of a book of more than a thousand pages, with more than 200 images, from 74 to 37MB.

Original script: ebooks.stackexchange.com/a/720
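
The linked script has the full details; the general idea is that an epub is just a zip archive, so the images inside can be extracted, recompressed, and rezipped. A rough sketch assuming ImageMagick is installed, with image paths that vary from book to book:

unzip book.epub -d book_tmp
mogrify -quality 60 -resize 60% book_tmp/OEBPS/Images/*.jpg
cd book_tmp
zip -X0 ../book_small.epub mimetype
zip -Xr9 ../book_small.epub . -x mimetype
cd ..

The mimetype entry has to be added first, stored uncompressed (-X0), for the result to remain a valid epub.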

This is Internet Gold.

> "... it is important to note that the compression algorithm used by lzip only discards the unimportant data. And if it was unimportant before, what makes it so important now? Huh? In fact, many users may find that compressing their entire file system and then restoring it will be a good way to learn what is truly important."

web.archive.org/web/2001060804


Brand new PEP by @emmatyping to add Zstandard to the standard library:
peps.python.org/pep-0784/

Will it make it into 3.14 before the feature freeze on 2025-05-06? It'll be close, but it's possible!

The PEP also suggests namespacing the other compression libraries (lzma, bz2 and zlib) under a new compression package, with a 10-year deprecation period for the old names.

Join the discussion to give your support, suggestions or feedback:

discuss.python.org/t/pep-784-a

#PEP #PEP784 #zstd