I see long discussions of John Searle's Chinese Room thought experiment, and I still fail to see why anyone should spend time on it. The experiment tells us nothing.
In essence, the experiment has a human perform a menial, mindless role inside a larger machine. The role of identifying symbols, processing them according to rules, and transferring the results to the outside world is something we already know a computer can do--mindlessly. So why would we expect a human performing those actions to need any understanding in order to play that role?
Searle's experiment purports to comment on the Turing test and on whether any machine can exhibit understanding, but all it illustrates is that you can subdivide a large intelligent system into smaller parts that follow fixed rules and have no understanding of the symbols they manipulate. Any surprise there? Having split the apparently intelligent room into a thinking part (the reference material and its algorithm) and a purely mechanical communication part (the man in the room), Searle focusses on the tediously boring and obviously unintelligent part. The man is a distraction: "Ooh look, there's a man in there, but he doesn't understand any of the symbols!"
I wish Searle had spent more of his time on the thinking part rather than misleading people with pointless arguments focussed on the mechanical one.