toad.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon server operated by David Troy, a tech pioneer and investigative journalist addressing threats to democracy. Thoughtful participation and discussion welcome.

Server stats: 291 active users

#dnns

0 posts · 0 participants · 0 posts today
Nom__XD, 飲ん,诺穆 (They/Them)🐈‍⬛<p><a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/chatgpt" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatgpt</span></a> <a href="https://mastodon.social/tags/claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude</span></a> <a href="https://mastodon.social/tags/gemini" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gemini</span></a> <a href="https://mastodon.social/tags/grok" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>grok</span></a> <a href="https://mastodon.social/tags/deepseek" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>deepseek</span></a> <a href="https://mastodon.social/tags/alexa" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>alexa</span></a> <a href="https://mastodon.social/tags/copilot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>copilot</span></a> <a href="https://mastodon.social/tags/vibecoding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>vibecoding</span></a> <a href="https://mastodon.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://mastodon.social/tags/behaviortrees" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>behaviortrees</span></a> <a href="https://mastodon.social/tags/cnns" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cnns</span></a> <a href="https://mastodon.social/tags/transfomers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>transfomers</span></a> <a href="https://mastodon.social/tags/dnns" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dnns</span></a> <a href="https://mastodon.social/tags/supervisedlearning" 
class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>supervisedlearning</span></a> <a href="https://mastodon.social/tags/unsupervisedlearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>unsupervisedlearning</span></a> <a href="https://mastodon.social/tags/reinforcedlearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reinforcedlearning</span></a> <a href="https://mastodon.social/tags/aiart" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>aiart</span></a> <a href="https://mastodon.social/tags/ArtificialInteligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialInteligence</span></a> <a href="https://mastodon.social/tags/nlp" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nlp</span></a> 🖕🐾</p>
Annual Computer Security Applications Conference<p>Ending the session, we saw Park et al.'s "DeepTaster: Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in Deep Neural Networks" demonstrating the detection of unlawful dataset use in <a href="https://infosec.exchange/tags/DNNs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DNNs</span></a>. (<a href="https://www.acsac.org/2023/program/final/s321.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">acsac.org/2023/program/final/s</span><span class="invisible">321.html</span></a>) 4/4<br><a href="https://infosec.exchange/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a> <a href="https://infosec.exchange/tags/DataSecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DataSecurity</span></a></p>
Christos Argyropoulos MD, PhD<p>Compressors such as <a href="https://mstdn.science/tags/gzip" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gzip</span></a> + <a href="https://mstdn.science/tags/kNN" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>kNN</span></a> (k-nearest-neighbor, i.e., your grandparents' <a href="https://mstdn.science/tags/classifier" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>classifier</span></a>) beat the living daylights out of deep neural networks (<a href="https://mstdn.science/tags/DNNs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DNNs</span></a>) in sentence classification.<br>H/t <span class="h-card"><a href="https://lingo.lol/@lgessler" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>lgessler</span></a></span> </p><p>Without any training parameters, this non-parametric, easy, and lightweight (no <a href="https://mstdn.science/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a>) method achieves results that are competitive with non-pretrained deep learning methods on six in-distribution datasets. It even outperforms BERT on all five OOD datasets.</p><p><a href="https://mstdn.science/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mstdn.science/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> </p><p><a href="https://aclanthology.org/2023.findings-acl.426/" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="ellipsis">aclanthology.org/2023.findings</span><span class="invisible">-acl.426/</span></a></p>
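The compressor-plus-kNN idea from the linked paper can be sketched in a few lines: score the distance between two texts with the normalized compression distance (NCD) under gzip, then classify with plain majority-vote kNN. This is a minimal sketch of the technique, not the paper's code; the training sentences and labels below are toy stand-ins invented for illustration.

```python
import gzip


def clen(s: str) -> int:
    """Length in bytes of the gzip-compressed UTF-8 encoding of s."""
    return len(gzip.compress(s.encode("utf-8")))


def ncd(a: str, b: str) -> float:
    """Normalized compression distance: small when a and b share structure."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)


def knn_predict(query: str, train: list[tuple[str, str]], k: int = 3) -> str:
    """Label the query with the majority class among its k nearest neighbors under NCD."""
    ranked = sorted(train, key=lambda ex: ncd(query, ex[0]))
    top = [label for _, label in ranked[:k]]
    return max(set(top), key=top.count)


# Toy training set (invented examples, two classes).
train = [
    ("the goalkeeper saved a penalty in the final", "sport"),
    ("the striker scored twice before halftime", "sport"),
    ("relentless midfield pressing won the match", "sport"),
    ("the central bank raised interest rates again", "finance"),
    ("stocks fell sharply after the earnings report", "finance"),
    ("bond yields climbed on new inflation data", "finance"),
]

pred = knn_predict("the defender scored in the final match", train)
print(pred)
```

Note there is no training step at all: the "model" is just the labeled examples plus a general-purpose compressor, which is what makes the method GPU-free and parameter-free.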
Nick Byrd, Ph.D.<p>Why <a href="https://nerdculture.de/tags/DeepNeuralNetworks" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepNeuralNetworks</span></a> need <a href="https://nerdculture.de/tags/Logic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Logic</span></a>:</p><p>Nick Shea (<a href="https://nerdculture.de/tags/UCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>UCL</span></a>/#Oxford) suggests</p><p>(1) Generating novel stuff (e.g., <a href="https://nerdculture.de/tags/Dalle" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Dalle</span></a>'s art, <a href="https://nerdculture.de/tags/GPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT</span></a>'s writing) is cool, but slow and inconsistent.</p><p>(2) Just a handful of logical inferences can be used *across* loads of situations (e.g., <a href="https://nerdculture.de/tags/modusPonens" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>modusPonens</span></a> works the same way every time).</p><p>So (3) by <a href="https://nerdculture.de/tags/learning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>learning</span></a> Logic, <a href="https://nerdculture.de/tags/DNNs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DNNs</span></a> would be able to recycle a few logical moves on a MASSIVE number of problems (rather than generate a novel solution from scratch for each one).</p><p><a href="https://nerdculture.de/tags/CompSci" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CompSci</span></a> <a href="https://nerdculture.de/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a></p>
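Shea's point, that a single inference rule amortizes across many unrelated problems, can be illustrated with a minimal sketch. The rule sets and facts below are toy examples of my own invention, not from the talk; the point is only that one modus ponens routine is reused unchanged across domains.

```python
def modus_ponens(facts: set[str], rules: list[tuple[str, str]]) -> set[str]:
    """Repeatedly apply 'if P then Q' rules: whenever P is derived, add Q."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived


# The very same routine, applied to two unrelated toy domains:
weather = modus_ponens(
    {"raining"},
    [("raining", "wet streets"), ("wet streets", "slippery roads")],
)
circuits = modus_ponens(
    {"switch closed"},
    [("switch closed", "current flows"), ("current flows", "lamp on")],
)
print(weather, circuits)
```

A generative model would have to produce each conclusion afresh; the rule-based routine gets both for free from one reusable logical move, which is the economy the post is gesturing at.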