Rachel
ok, looking a bit closer I think I see how it works! But I have some thoughts and a decision/question...

Each Ceph pool gets automatically split into some number of PGs, and each PG gets placed by the CRUSH rules.

So the PGs of my rbd pool each get split across three hosts, but *which* three is chosen so that total data usage is spread evenly across nodes/disks based on their weight.

For some reason the big HDD EC pool only has 1 PG so far, so it's only using 6 of the drives; as it adds PGs it should spread out to all 8 drives just fine.

But now I am thinking:

Do I continue with OSD failure domain, or do I switch to 2+1 EC with 4 hosts for this pool?

Basically everyone suggests not using OSD failure domain, but the mgr/etc. data is replicated on the SSDs, and with 8 drives it could rebalance (it will be a LONG time till I fill this, or even get close to 50% used).

Meanwhile, with 3+1 and node failure domain I'd have the same capacity.

@willglynn@hachyderm.io any thoughts? It could be a while until I can add more nodes/disks, so no suggestions of filling a rack with 20 more servers ;) #Ceph #Kubernetes #Homelab
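For reference, this is roughly how I've been poking at the PG placement, plus roughly what a 2+1 host-failure-domain profile would look like. Pool and profile names here (`my-ec-pool`, `ec-2-1-host`) are placeholders, not my actual ones:

```sh
# Inspect current placement: weights and data distribution per host/OSD,
# and which OSDs each PG of the pool actually lands on.
ceph osd df tree
ceph pg ls-by-pool my-ec-pool

# The pg_autoscaler is why the pool only has 1 PG so far; this shows its plan.
ceph osd pool autoscale-status

# A 2+1 EC profile pinned to the HDDs with host failure domain would look
# roughly like this:
ceph osd erasure-code-profile set ec-2-1-host \
    k=2 m=1 \
    crush-failure-domain=host \
    crush-device-class=hdd

# k/m of an existing EC pool can't be changed, so switching would mean
# creating a new pool with this profile and migrating the data into it.
ceph osd pool create my-ec-pool-2-1 erasure ec-2-1-host
```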