All posts by Mark Kelly

The Myth of Brainstorming

Hi all. So, here’s a fly for your ointment. I refer to Informatics U4O1 key skill 1.

I am reading ‘How To Fly a Horse’* by Kevin Ashton, which is a study of creative thinking. He discusses brainstorming, which is taken for granted as a standard tool for generating creative ideas. In short, he says brainstorming is a waste of time.

He says that brainstorming (invented by Alex Osborn in 1939 and published in his book of 1942) has two assumptions:

1. groups produce more ideas than individuals

Researchers in Minnesota** tested this at the 3M company.
Half the subjects worked in groups of four. The other half worked alone.
In every case, the people working independently produced 30% to 40% more ideas than the groups, and the ideas were independently judged to be of higher quality.
Later research found productivity decreased as group sizes increased.
Conclusion: contrary to expectations, group brainstorming actually inhibits creative thinking.

2. no contribution should be criticised

Ashton says that researchers in Indiana got groups to brainstorm ideas for brand names of products.
Half the groups were told to refrain from criticising ideas. The other half were allowed to offer criticism as they went along.
The groups that did not criticise produced more ideas, but all groups produced the same number of good ideas, according to independent judges.
Deferring criticism added only bad ideas. Later research confirmed this.

Conclusion: the best way to create solutions is to work alone and evaluate solutions as they occur. Don’t try to create revolutionary ideas with a committee or a team.

It makes one think…

Regards,
Mark

* https://medium.com/galleys/brainstorming-does-not-work-6ad7b1448dcf

** http://psycnet.apa.org/psycinfo/1963-07944-001

Key Knowledge Kwiz

 
Here’s a classroom competition/game I dreamt up called ‘Key Knowledge Kwiz’. 
 
There are two competing teams – of 1, 2 or 3 kids per team. Or 4. Doesn’t matter.
 
Each team prepares a list of IT theory questions that must have verifiably true or false answers. Maybe 5 questions per team member, plus some spares, just in case (as explained later).
 
(Because kids create their own questions, the teacher doesn’t have to spend time in advance writing them. So the game can be sprung at any time, and the creative responsibility is squarely on the students’ shoulders.)
 
Teams take turns to give the other team a question, e.g. “New York is one of the accepted citation styles.”
 
The other team says whether the answer is true or false (for 1 point). Team members may confer before answering.
 
The answering team may then earn another 2 points for either:
– correctly explaining why the answer is false (e.g. “The style is ‘Chicago’, not ‘New York’.”)
or
– if the answer was ‘true’, giving a correct and relevant Fun Fact (e.g. “Fun fact: another accepted style is IEEE”).
 
A team is not allowed to pose a question that has already been asked by the other team, or has already been given as a Fun Fact.
So, using the examples above, neither team could then ask a question about ‘IEEE’ or ‘Chicago’. 
 
(This means that teams need to have some spare questions and/or improvise new ones if they suddenly find some of their questions are out of play. This adds a little improv spice to the game.)
 
To add difficulty, if a team mistakenly asks a question that does not actually have a true or false answer (e.g. “Is a large company subject to the Privacy Act 1988?”), the other team can challenge the question (for 2 points) and clearly explain why it is invalid (for 4 points), for example, “Objection! If it’s a private company, its size is not relevant. It depends on whether the company turns over more than $3m a year or… etc”
 
(Question-setters will be more careful when framing their questions if they know they will give away twice as many points because they carelessly asked a faulty question.)
 
If there is a dispute over scoring, the audience can be the adjudicators and argue the merits of the question and/or answer until a crowd-sourced decision is agreed upon. 
 
The teacher only needs to make a final arbitration in case the entire crowd is wrong – but the teacher needs to prove that the final judgement is valid.
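For anyone who wants a scoreboard, the points on offer could be tallied with a trivial Python helper. This is a sketch only; the event names are my own invention, not part of the game:

```python
# Points on offer at each step of the Kwiz, as described above.
POINTS = {
    "true_false": 1,        # correctly saying whether the statement is true or false
    "explanation": 2,       # correctly explaining why a false statement is false
    "fun_fact": 2,          # a correct, relevant Fun Fact after a 'true' answer
    "challenge": 2,         # successfully challenging an invalid question
    "challenge_reason": 4,  # clearly explaining why the question is invalid
}

def tally(events):
    """Sum a team's score from a list of scoring events."""
    return sum(POINTS[event] for event in events)

# A good round: right answer, a Fun Fact, then a fully-argued challenge.
print(tally(["true_false", "fun_fact", "challenge", "challenge_reason"]))  # 9
```

Note how a successful challenge (2 + 4 points) is worth twice a routine correct answer with a Fun Fact (1 + 2 points) – the incentive to frame questions carefully.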
 
This game may be useful to

(a) fill in time;
(b) help kids focus on very specific KK details;
(c) help them judge and deal with dodgy exam questions in Section A where none of the available options is correct;
(d) anticipate common errors and frame questions to exploit them (as exams often do);
(e) stop the teacher talking all the time.
 
I guess this format could also be used for subjects other than IT. 
 
Enjoy
 
Mark

Correctness versus Accuracy 2

Correctness versus Accuracy
7 January 2016

The 2016-2019 VCE Computing study design, U1O3 KK05, wants students to know “factors affecting the integrity of data, such as correctness, reasonableness and accuracy” – which is fine with me…

Until you start to ponder. What is the difference between “correctness” and “accuracy”?

The study design’s glossary does not help.

The Concise Oxford English Dictionary (11th edition) helpfully says:

Correct (adjective) – (1) free from error; true or right. (2) conforming to accepted social standards.
Accurate (adjective) – (1) correct in all details. (2) capable of or successful in reaching the intended target.
I don’t think the COED is very helpful.

So, what are students supposed to know about the difference between the terms in the key knowledge? Here is how I (a former English teacher) see it:

“Correct” (adjective) is an absolute term. Something is either correct or it is not; there are no degrees of correctness, and you cannot be partially correct.

“Accurate” refers to the degree to which the truth is represented in a statement or calculation.

The ISO (ISO 5725-1) defines ‘accuracy’ as results that are both true (correct) and consistent (without random errors). Accurate observations are close to the true, actual value they claim to represent.

Examples (as I see it):

Correct, but not very accurate: France is a country in the northern hemisphere.
Correct, but more accurate: France is a country in Europe.
Incorrect: France is a country in Africa.
Correct but not very accurate: Pi = 3.14
Correct and more accurate: Pi = 3.14159
Incorrect: Pi = 8.08495869456
My interpretation implies that a statement or observation can be correct but not very accurate, but it cannot be accurate yet incorrect. To be in any way accurate, it must first be correct.
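One way to make “more accurate” concrete is absolute error: the distance between an approximation and the true value. A quick illustrative check in Python, using the Pi examples above:

```python
import math

# Absolute error: the smaller the error, the more accurate the approximation.
for approx in (3.14, 3.14159, 8.08495869456):
    print(f"pi ~ {approx}: absolute error {abs(approx - math.pi):.8f}")
```

Both 3.14 and 3.14159 are correct (they round off the true value), but 3.14159 has a far smaller error, so it is more accurate; 8.08495869456 is simply wrong, however many decimal places it carries.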

Challenge: Can anyone suggest an example where something is incorrect, but accurate?

Precision

A related term “precision” refers to the consistency and ‘repeatability’ of multiple observations, regardless of how closely they represent the truth. Precise results may be grouped closely together in value, but still be far from the objectively true and accurate value.

(Informatics students might want to interpret this as “the results have a low standard deviation.”)

Bathroom scales that show a person’s weight as being 70kg plus or minus 50 grams every day for a week would be precise because the variation in measurements is small. But if the person actually weighs 90kg, the scales would be inaccurate.
Another view of ‘precision’ in common usage refers to the amount of resolution in a measurement, such as the number of decimal places recorded.

So, bathroom scales might be:

precise but not accurate: the scales consistently show similar values that are far from the true weight.
accurate but not precise: the readings average close to 90kg, but individual readings vary widely from one another.
both precise and accurate: the readings are always very close to 90kg. Such accurate and precise readings may also be classed as “valid”.
neither precise nor accurate: the readings vary widely, and none of them is close to 90kg.
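The four cases above can be expressed numerically: a low standard deviation means precise; a mean close to the true value means accurate. A minimal Python sketch – the 0.5kg tolerances are my own illustrative assumptions, not any standard:

```python
import statistics

TRUE_WEIGHT = 90.0  # kg: the person's actual weight

def classify(readings, true_value=TRUE_WEIGHT, spread_tol=0.5, error_tol=0.5):
    """Classify a set of scale readings as (precise, accurate).

    Precise  -> readings cluster tightly (low standard deviation).
    Accurate -> the mean reading is close to the true value.
    The tolerance values (in kg) are illustrative assumptions only.
    """
    precise = statistics.stdev(readings) <= spread_tol
    accurate = abs(statistics.mean(readings) - true_value) <= error_tol
    return precise, accurate

# Precise but not accurate: consistent readings, 20kg from the truth.
print(classify([70.00, 70.05, 69.95, 70.02]))  # (True, False)

# Accurate but not precise: centred on 90kg, but widely scattered.
print(classify([88.0, 92.0, 89.0, 91.0]))      # (False, True)
```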
Although casual usage treats ‘accuracy’ and ‘precision’ interchangeably, they are very different in strict scientific usage – much as VCE IT assigns strict meanings to everyday terms like ‘data’, ‘information’ and ‘accessibility’.

Further reading:
https://en.wikipedia.org/wiki/Accuracy_and_precision

Interbits

Beware of the ‘good old days’ stories

http://imgs.xkcd.com/comics/old_days.png

“Bloody hell, granny.
I lost marks on my SD exam because you said that because of the Australian federal intersex legislation of 2013, bits now have the right to be identified as ‘interbit’ and not rigidly categorised as simply 1 or 0.

On the positive side, interbits have led me to a breakthrough in quantum computing, so I now own Google, Apple, Microsoft, Intel, IBM, Telstra, Facebook … [phone rings] … and Tasmania, apparently. Huh. Can’t even remember bidding for that.”

Apple does something right? Wow!

Usually – to save time – I immediately assume every design move by Apple is stupid, dangerous, or likely to damage the ozone layer.

I read today that they were ditching the function keys from their new range of laptops.

My knee-jerk greybeard instincts made me joyously utter, “You dumb whippersnappers…”

Then I realised they were completely right.

That means either I’m becoming a better person, or I need to change my medication.

Mark

P.S. I’ve still not completely forgiven Apple for removing floppy disk drives about 15 years ago without asking me first.

And – deep down – I still believe that it does a POWER OF GOOD to modern youth today to have to memorise the actions of 12 function keys multiplied by various combinations of SHIFT, CTRL, ALT modifier keys – for each different application.

I mean to say. I had to do it, and it didn’t do ME any harm.

Sorry nurse. I didn’t mean to hit you with my cane.

I’ll be good now.

Is there jelly for dessert tonight?

I like jelly.

A sad tale of creeping data rot and general obsolescence

Hi, technological die-hards,

We all know of the dangers of digital data that becomes inaccessible due to damage (e.g. magnetic tapes and disks that decay or fade).

And the problems of data being orphaned by hardware changes – e.g. have you tried reading a 5.25″ floppy disk or a vinyl LP recently?

Or failures due to software developments…

I have been a happy user of Microsoft’s Image Composer (MIC) since it first appeared on CD 2 of FrontPage circa 1998. Whenever I had an image that needed a quick and dirty crop, resize, background removal or merging with another image I’d fire up MIC, and I’ve been delighted that it has happily worked for the past 18 years – which is 126 in dog or software years.

Since I moved to Win10, however, I’ve seen MIC start to stumble and groan when asked to do its daily deeds. It won’t any longer let me drop images onto its interface to load them; it needs to be led to network shares like a blind pony; it complains loudly about modern whippersnapper file formats that it doesn’t understand because their trousers are too low, or they have tattoos.

I have managed to take MIC out walking each day for 18 years with ever-increasing doses of ‘Compatibility Mode’ settings, and gentle handling (e.g. opening files with File > Open rather than drag and drop) but I suspect that the old dog may not survive many more Windows updates without some sort of heroic intervention, such as OS emulation. But I’d hate to see him living on life support.

To that end, I am sad to say that I have recently decided to convert all my *.mic (Image Composer’s native format) files to another format.

But not even Photoshop CS can open “the wrong kind of document” as it cruelly refers to *.mic files.

One experiment later – let the crowds rejoice! – if I use MIC to save a multi-layer (‘sprite’ in MIC terms) *.mic image to Photoshop PSD 3.0 format, Adobe Photoshop CS can read and respect the MIC ‘sprites’ as Photoshop layers! Woohoo! I won’t have to flatten the multi-sprite MIC files into one-layer images.

So, while MIC still has a pulse, I will pluck up his refugee *.mic children and whisk them to a new format that may last a little longer, so they don’t become degraded or completely unreadable.

But in the end, even Photoshop will become “What is this ‘Photoshop’ stuff, grandpa? Is that like ‘Kodak’ that we learnt about in History?” – and you will sigh and struggle to prevent the inevitable lecture about the ‘good old days’.

You may want to delve into your archives (remember to check the offline media too – disks, tapes, SIM and SD cards, old flash drives) and find those file formats that are surviving on life support and desperately needing a transplant.
Are any files locked away by old DRM, discontinued encryption methods or failed proprietary file formats, or do they require obsolete hardware drives or ports, like the original, hugely popular, ground-breaking Sony Memory Stick?
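The first step of that audit is easy to automate: walk the archive and flag files whose extensions mark them as at-risk formats. A sketch in Python – the extension list is purely illustrative, so substitute whatever formats haunt your own archives:

```python
from pathlib import Path

# Illustrative at-risk extensions -- adjust for your own archives.
AT_RISK = {".mic", ".ws", ".wps", ".pcx", ".cdr"}

def find_at_risk(root):
    """Walk a directory tree and list files whose formats may need migrating."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.suffix.lower() in AT_RISK)

for path in find_at_risk("."):
    print(path)
```

It won’t open the files for you, of course, but it tells you where the patients are before you plan the transplants.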

Now I just need to do something about those Wordstar files I need to read from those 8″ floppy disks. I seem to remember that one of them proves that I invented the TCP/IP protocols…

Mark

Visit obsoletemedia.org for a disturbing view of your hardware’s future…

Further reading about media format obsolescence.

The new Informatics exam format has been announced

Exam specs 2016

http://www.vcaa.vic.edu.au/Documents/exams/technology/it-informatics-specs-samp.pdf

Section A will consist of 20 multiple-choice questions worth 1 mark each and will be worth a total of 20 marks.
Section B will consist of short-answer questions and will be worth a total of 30 marks.
Section C will consist of short-answer and extended-answer questions, including questions with multiple parts. Questions will be based on a case study. Materials relating to the case study for Section C will be presented in a detachable insert in the centrefold. Section C will be worth a total of 50 marks.

Logical fallacies and failures

When evaluating secondary sources, there are some logical fallacies and bad practices you might want to be aware of… 

– Causation – If A and B are correlated it does not mean A causes B. 
Or B might cause A. 
Or A might cause B while B also causes A in a feedback loop. 
Or both A and B might be caused by unknown factor C.
Or similarities between the trends in A and B may be purely coincidental.
 
– Just because A happens before B does not mean A causes B. This is a particularly dangerous fallacy, e.g. “immunization causes autism”; a cancer sufferer drank green tea and had complete remission, therefore green tea cures cancer.
 
– Confusion of cause and effect – “He’s unmarried because he’s so angry” vs “He’s so angry because he’s unmarried”.

 
– A pattern in a series of random events (e.g. coin tosses) does not mean the events are causally linked.
 
– Cherry picking – selecting only the data that supports a hypothesis.
 
– Ignoring extraneous variables – Assuming that only one possible cause exists when in fact others are possible.
 
– Ad hominem – attacking the person on irrelevant personal grounds rather than attacking his or her argument. e.g. “Could you believe a tattooed and pierced freak like him could ever be a serious member of parliament?”
 
– Using loaded questions – questions that have inbuilt assumptions, e.g. “Considering the growing rates of autism, would you have your child immunized?”
 
– Using leading questions – that contain information that suggests the desired answer. e.g. “As a responsible parent, would you risk immunizing your child?”
 
– Double-barrelled questions – which lead to ambiguous results by combining two separate variables, e.g. “Do you agree with child immunization and circumcision? Yes/No.”
 
– Questions that force some people into giving inaccurate answers, e.g. “Do you go to church daily / weekly / occasionally?” The options are incomplete.
 
– Vague, ambiguous or subjective questions – that may be interpreted differently by different people. e.g. “Are men superior to women?” (in what respects?) or “Do you swear often?” (what does “often” mean?)
 
– Begging the question – assuming the truth of an issue that is in question. e.g. “But he’s a gentle, law-abiding man! How could he possibly be guilty of this horrific massacre?”  (It annoys me mightily when people say this when they actually mean “raises the question”. e.g. “He said he was a lazy man, which begs [raises] the question of how he achieved so much.”)
 
– Over-generalisation – extrapolating from a small number of observations to formulate a rule, e.g. “The Collingwood fans I saw at the pub after the match were rowdy, annoying, drunken idiots. I hope my daughter never has to live in Collingwood.”
 
– Misusing anecdotal evidence – single or personal examples do not over-ride large masses of contrary evidence. e.g. “Feisty Edith is still full of vim and vigour at the ripe old age of 100 and she still smokes a cigar every day! Smoking obviously can’t be too dangerous, can it? Ha ha ha. Over to you for tips on how to cook kale and quinoa pizzas, Sandra. Ha ha ha.”
 
– Appeal to emotion – side-stepping logic by using emotionally-charged claims. e.g. “You shouldn’t immunize your child. Think of the tragic and devastating life-long effects this can have.”
 
– Appeal to pity or fear – e.g. “An immunized child, stricken with autism, doomed to live a life of pain and suffering and <insert horrors here>. How can you let this happen?”
 
– Unproven compromises – a weak attempt to play it safe by pleasing everyone a little bit. e.g. “Some people believe that immunizations cause autism. Some believe they don’t. Let’s just say then that some immunizations cause autism, OK?”
 
– Burden of proof – avoiding giving substantiating evidence by challenging the opponent to prove otherwise. e.g. “I am sure immunization causes autism. Show me one piece of evidence that I’m wrong.”
 
– Appeal to ignorance – assuming that something must be true (or false) just because it has not been proven false (or true). e.g. “Ghosts exist. No-one has proven they don’t!” (This is similar to ‘burden of proof’ above.)
 
– Wrongful dismissal – reasoning that because an argument was poorly presented or had spelling mistakes, then it must be wrong. This is common amongst Spelling Stormtroopers on forums, e.g. “You can’t even spell ‘their’ properly. How can we believe you when you say rubbish like ‘Jon Snow is going to come back from the dead in season 6?’ ROFL. Buy a dictionary, Hodor.”
  
– Appeal to tradition – arguing that an idea must be wrong because “We’ve never done it that way”. Or saying that an idea is right because it’s commonly practised. “Hundreds of lobotomies are performed every day, Mrs Smith. I really don’t know what you’re so worried about.”
 
– Sheer ignorance – belief that a hypothesis must be wrong because one is too ignorant to understand it. “How can disease be caused by tiny germ things? Things can’t be so small you can’t see them? It’s ridiculous.” 
 
– Appeal to vanity – e.g. “Any intelligent reader will know that…”
 
– Straw man (often mislabelled ‘reductio ad absurdum’) – exaggerating an opponent’s argument to ridiculous extremes so it can be mocked. (A genuine reductio ad absurdum, which validly derives a contradiction from a premise, is not a fallacy.) e.g. “These anti-immunization people say that the MMR vaccine causes autism. Next they’ll be saying that aspirin causes your legs to fall off! How can you trust anything they say?”
 
– Appeal to novelty – assuming that something is better because it’s new. e.g. “Windows Vista / 4K video / Microsoft Clippy / internet-connected fridges / voice recognition replacing the keyboard / the hydrogen-filled airship / the Atkins Diet is new! Buy it now.”
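The hidden-common-cause case in the first item above is easy to demonstrate with synthetic data: two variables that never influence each other can still correlate almost perfectly. A sketch in Python; the ice-cream/drowning scenario and all the numbers are illustrative only:

```python
import random

rng = random.Random(42)

# Hidden common cause C: say, daily temperature over 1000 days.
C = [rng.uniform(0, 100) for _ in range(1000)]

# A (ice-cream sales) and B (drowning incidents) each track C plus noise;
# neither has any effect on the other.
A = [c + rng.uniform(-1, 1) for c in C]
B = [c + rng.uniform(-1, 1) for c in C]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Correlation is close to 1, yet A does not cause B (nor B cause A).
print(round(pearson(A, B), 3))
```

A naive analyst would conclude that ice-cream causes drowning; the correlation is entirely the work of the unseen factor C.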
 
 
Thanks to https://yourlogicalfallacyis.com/poster for some tasty input.
 
By the way. The immunization/autism example I’ve used is just an example.
 
But if you’re morally outraged, please read this.
Then accept that you are an easily-swayed middle-class idiot.
Do not email me about this subject.
I have already validated and rejected your invalid input.
 
 
 

What is ‘Informatics’?

The 2016-2019 study design has replaced the previous subjects ‘Information Processing and Management’ and ‘IT Applications’ with an odd new beastie, Informatics.

Rather oddly, the study design declines to actually define what the word means. So I asked Mr Google…

“Informatics is the science of computer information systems. As an academic field it involves the practice of information processing and the engineering of information systems.

The field considers the interaction between humans and information alongside the construction of interfaces, organisations, technologies and systems. It also develops its own conceptual and theoretical foundations and utilizes foundations developed in other fields.

As such, the field of informatics has great breadth and encompasses many individual specializations, including disciplines of computer science, information systems, information technology and statistics. Since the advent of computers, individuals and organizations increasingly process information digitally.

This has led to the study of informatics with computational, mathematical, biological, cognitive and social aspects, including study of the social impact of information technologies.”