So, for anyone who is curious what it means when they read or hear about a website being "spiderable": when a search engine crawls your site, it grabs the HTML and starts to make sense of it. Presumably a modern search engine could grab your CSS and render the page internally to figure out what the user actually sees, but that's a lot of work. When engines do grab and render CSS files, it's probably just to look for hidden links (links made invisible with CSS) and other spam signals, not to make sense of your site structure.
If you aren’t using Lynx, the text-based browser, to review your site, you can do it online with the SEO Browser. With modern CSS-based layouts, there is no reason to have your content buried, since you can position things wherever you want. In the old days of tables, many designers put most of the site’s structure (navigation, headers, sidebars) first in the markup, so the actual content ended up down at the bottom. Early spiders had a download cap for each page, so if your content was pushed down far enough, it never got read.
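To see what a text browser or an early spider sees, you can strip a page down to its text in document order. Here's a minimal sketch in Python using the standard library's `html.parser`; the table-layout page is a made-up example, not a real site:

```python
from html.parser import HTMLParser

class TextDump(HTMLParser):
    """Collect visible text in document order -- roughly what Lynx
    or an early, byte-capped spider would see."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self.skip = 0  # depth inside <script>/<style>, whose text isn't visible

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

# Hypothetical old-style table layout: the navigation markup comes
# first, and the real content is pushed to the bottom of the HTML.
page = """
<html><body><table>
<tr><td>Home</td><td>About</td><td>Links</td></tr>
<tr><td>The actual article content lives way down here.</td></tr>
</table></body></html>
"""

parser = TextDump()
parser.feed(page)
print(parser.chunks)
```

Run against a page like this, the navigation labels come out first and the content last, which is exactly the problem a download cap turns into invisible content.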
Generally, if your site is big and important, the spiders/engines will spend the time to figure you out. If you are a small-time website operator, why make it difficult? Structure your website so it would make sense in a 1997-era browser, with Titles, H1/H2/H3s, and Paragraphs, and the spiders will understand your site. You can always move your pretty little design above it with CSS.
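That 1997-era structure is easy to sanity-check: pull out just the title, headings, and paragraphs and see if they read like an outline of the page. A rough sketch, again with `html.parser` and an invented sample page:

```python
from html.parser import HTMLParser

# The tags a spider leans on most when summarizing a page.
SEMANTIC = ("title", "h1", "h2", "h3", "p")

class Outline(HTMLParser):
    """Collect (tag, text) pairs for the semantic tags, in order."""
    def __init__(self):
        super().__init__()
        self.current = None
        self.outline = []

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC:
            self.current = tag

    def handle_endtag(self, tag):
        if tag in SEMANTIC:
            self.current = None

    def handle_data(self, data):
        if self.current and data.strip():
            self.outline.append((self.current, data.strip()))
            self.current = None

# A spider-friendly skeleton: content first in the markup; CSS
# (not shown) can reposition the visual design above it later.
page = """
<html><head><title>Widget Reviews</title></head><body>
<h1>Widget Reviews</h1>
<h2>The Best Widget of 2008</h2>
<p>Here is the content, right at the top of the markup.</p>
</body></html>
"""

parser = Outline()
parser.feed(page)
for tag, text in parser.outline:
    print(tag, "->", text)
```

If the outline that prints makes sense on its own, a spider will have no trouble with the page.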