Joel Snyder<p>New preprint/paper alert: a first effort at studying global properties of natural auditory scenes. This was just accepted at Open Mind, a relatively new diamond open access <a href="https://neuromatch.social/tags/cognitivescience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cognitivescience</span></a> journal.</p><p>This is Maggie McMullin's master's thesis, plus some brilliant computational modeling by our colleagues at Johns Hopkins, Rohit and Mounya. Brian Gygi provided tons of code for acoustical analysis, and our former post-doc Nate Higgins helped with much of the MATLAB coding. Maggie recorded all of our stimuli using a Zoom Q8 recorder, and they are available on OSF.</p><p><a href="https://neuromatch.social/tags/psychology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>psychology</span></a> <a href="https://neuromatch.social/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a> <a href="https://neuromatch.social/tags/auditory" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>auditory</span></a> <a href="https://neuromatch.social/tags/auditoryscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>auditoryscience</span></a> <a href="https://neuromatch.social/tags/deeplearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>deeplearning</span></a> <a href="https://neuromatch.social/tags/Computational_Neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Computational_Neuroscience</span></a> <a href="https://neuromatch.social/tags/computational" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>computational</span></a></p><p><a href="https://osf.io/preprints/psyarxiv/r7zx4" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">osf.io/preprints/psyarxiv/r7zx</span><span class="invisible">4</span></a></p>