Good news: 44,583 usable responses!

Bad news: Responses alone use 4,592,255 cells on Google Sheets, which is 92% of the 5 million cell limit, before I even start writing formulae to count everything!

Good news: I've split it into one Sheet per question, and the data is now processed!

Bad news: This means I will definitely have to (*gulp*) learn new software and maybe even upgrade my computer in order to handle next year's responses, assuming they increase at a similar rate.

Good news: Maybe tomorrow I can start writing up the worldwide report for 2021??

Please note, this is very optimistic; I may not manage it because I have a LOT of fatigue today. On the other hand, it might be a pretty effective way to keep me resting on the sofa. 🤔

I wish I could somehow crowdfund "Cassian learns a new software" - no amount of money will make my brain do the thing, you know?

On that note! Statisticians, can you help out?

I'm going to need some software that can handle a LOT of calculations on a fairly average computer, involving probably 50,000 survey responses. The most important factor is user-friendliness.

Any recommendations?

Money is a secondary consideration to user-friendliness - it doesn't have to be free. If I can't learn how to use it without going to uni, then the Gender Census is kaput, y'know? So if it has to cost money to be solo-learnable (a real phrase), then I may have to crowdfund.

Another option might be... to limit the survey to 40,000 responses per year? 🤨 (Very much not what I want to do.)


@gendercensus Echoing a lot of the replies here. 50,000 unique identifiers and 5M data points are really trivial for most decent computers if managed correctly. I recommend not using Excel or LibreOffice, as it'll be frustrating. You will want to use PostgreSQL, R, or Julia, as others have recommended. If you literally have zero experience with these languages, I recommend taking up people's offers to help.

My experience is in R, and it literally takes just a few lines of code to get the data loaded into a format that allows exploration through many different statistical lenses. R was designed specifically for statistical analysis and for slicing and dicing large data sets (orders of magnitude larger than what you're working with). There are lots of packages that enable pretty sophisticated analyses with a single line of code.
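
For illustration, loading the data could look something like this - a minimal sketch only, where the filename and column name are hypothetical stand-ins, not the actual survey export:

```r
# Minimal sketch: load the survey export and look at one question's
# distribution. "responses.csv" and "identity_label" are hypothetical names.
library(readr)  # fast CSV reader from the tidyverse

responses <- read_csv("responses.csv")  # one row per survey response

nrow(responses)                  # how many responses were loaded?
table(responses$identity_label)  # count of each answer for one question
```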

I am also willing to assist. You've done the hard part of creating the survey, marketing it, and getting responses. Let others help <3


@jqiriazi When you say Excel would be comparatively frustrating - why is that, in your experience?

@gendercensus It'll be fine for doing simple distributions on each individual question, as 50,000 rows per sheet is manageable. But when you want to create statistics across multiple questions to tease out potentially significant and actionable nuances (e.g., x% of age group y represent the majority of answer a in question 3), the spreadsheet format and interface will start to be a barrier and increase the likelihood of errors.
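
To give a sense of what that kind of cross-question breakdown looks like in R, here's a sketch using dplyr - the column names are made up for the example:

```r
# Sketch: percentage of each answer within each age group.
# "age_group" and "q3_answer" are hypothetical column names.
library(dplyr)

responses %>%
  count(age_group, q3_answer) %>%      # tally each (age group, answer) pair
  group_by(age_group) %>%
  mutate(pct = 100 * n / sum(n)) %>%   # share of each answer within the group
  arrange(age_group, desc(pct))
```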

I have experience working with large, complex spreadsheets, and I find them very challenging to debug (I'm currently doing this for a spreadsheet with 96 sheets, 127,141 cells, 56,917 formulas, and 864 charts, so I speak from experience here).

R allows analysis of statistically significant correlations across any number of questions in literally 1-5 lines of code. It's much easier to debug. I will also claim that, for statistics, you'll have many more options and much more confidence in the results using R.
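
As one illustrative line (hypothetical column names again), here's a chi-squared test of whether the answers to two questions are associated:

```r
# Sketch: test whether answers to two questions are independent.
chisq.test(table(responses$age_group, responses$q3_answer))
```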

@jqiriazi Thanks for the detailed explanation, much appreciated! :)
