Monday, December 26, 2011

Bridging form and function


The old Poole lifting bridge was built in the 1920s. It comprises two sections that split apart in the centre.



A new bridge called the 'Twin Sails' is due to be completed in February. This is designed somewhat differently, comprising two triangles, each of sufficient length to span the gap they are to cover.

The old bridge can be modelled as two rectangles, meeting in the centre of the span to create one large rectangle, as in figure 1:

Figure 1: Old lifting bridge (plan view).

The new bridge can be modelled as two triangles that each span the gap, and tessellate together so as to form a large rectangle, as in figure 2:





Figure 2: 'Twin Sails' (plan view).


In both cases the two sections pivot at their ends to allow the bridge to lift. The closer the centre of mass of each section is to its pivoting end, the less strain is imposed on the lifting motors. The centre of mass of each rectangular section of the old bridge is halfway along the section, i.e. at a distance from the pivot of one quarter (0.25) of the total span of the bridge.

In the case of the triangular sections, for a triangle of uniform thickness, the centre of mass may be estimated as lying at a distance from the pivot (i.e. the base of the triangle) such that the surface area of the triangle above it is half the total surface area:

Figure 3: Triangle, showing line on which centre of mass rests.

The area of a triangle = (base x height)/2.

If the small triangle in figure 3 has half the area of the whole, then (b(2)h(2))/2 = (1/2) x (b(1)h(1))/2, i.e. b(2)h(2) = (b(1)h(1))/2.
The height to base ratio is the same for both triangles, therefore b(1)/h(1) = b(2)/h(2), and so b(1) = (b(2)h(1))/h(2).

By substitution:

b(2)h(2) = (b(2)h(1)^2)/(2h(2)), which gives h(2)^2 = (h(1)^2)/2, i.e. h(2) = h(1)/2^(1/2).

From this, H (the distance of this line, and hence of the estimated centre of mass, from the pivot point) = h(1) - h(2) = h(1)(1 - 2^(-1/2)), roughly h(1) x 0.293.

So, the centre of mass of the triangle (roughly 0.29 of the span from the pivot, against 0.25) is slightly further from the pivot point than is the case for the more conventional design spanning the same distance, and the Twin Sails design is therefore less efficient than its conventional predecessor. Essentially a triumph of form over function.
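As a quick numerical check of the comparison above, a minimal Python sketch (the span value is assumed purely for illustration; the exact centroid of a uniform triangle, one third of the height from the base, is included as a cross-check):

span = 100.0  # total gap spanned by the bridge, in metres (assumed for illustration)

# Old bridge: each rectangular leaf is half the span, pivoted at its outer end,
# so its centre of mass sits halfway along the leaf.
old_com = 0.5 * (span / 2)                     # 0.25 x span

# Twin Sails: half-area estimate derived above, for a triangle spanning the full gap.
twin_sails_estimate = span * (1 - 2 ** -0.5)   # roughly 0.293 x span

# Exact centroid of a uniform triangle, one third of the height from the base (pivot).
twin_sails_centroid = span / 3                 # roughly 0.333 x span

print(old_com, twin_sails_estimate, twin_sails_centroid)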

Wednesday, April 13, 2011

Uncertain times

Time is perplexing, or rather, our perception of it is. Why do we remember the past but not the future? Why do we travel inexorably forward in time?

Historically, physicists from Newton's time onwards have considered the world to be essentially deterministic, i.e. given sufficiently precise knowledge of the current state of things, and a sufficient grasp of the laws of physics, one could in principle predict the unfolding of reality indefinitely far into the future with total accuracy. Quantum theory introduces Heisenberg's Uncertainty Principle, which imposes a fundamental limit on the precision with which we can simultaneously gauge the momentum and position of a particle. Consequently the future ceases to be predictable (although one can still make probabilistic judgements concerning likely developments).

Suppose the universe were deterministic; what would be the implications for our perception of time? Let's consider a computer - a device to which we can attribute an arbitrarily high amount of processing power. Assuming that this device is capable of perceiving the world around it to an arbitrarily high level of precision, it will also be capable of determining the future events that derive from the present state of things. Having a conscious perception of these future events, and a certainty as to their accuracy, is surely not practically distinguishable from having a recollection of past events.

In summary then, in a deterministic universe it is possible to create an entity which can effectively 'remember' both past and future events, and which therefore has no perception of the 'arrow of time'. I therefore suggest that the Arrow of Time is a consequence of the Uncertainty Principle, and that without it the sense of the 'passage of time', the past, the present and the future, would in principle have no meaning.

Friday, March 18, 2011

Traffic Flows (2)

Referring back to the 15/03/2009 entry 'Traffic Flows', I've been considering the consequences of merging lanes of traffic, by analogy with the Bernoulli equation.

The Bernoulli equation in fluid dynamics, as it relates to the flow of an incompressible fluid, can be expressed as:

1/2(v^2) + gz + p/ρ = constant

where v is the fluid flow speed at a point on a streamline, g is the acceleration due to gravity, z is the elevation of that point above a reference plane (with the z direction opposing g), p is the pressure of the fluid at that point and ρ is the density of the fluid.

Taking this equation to be analogous to traffic flows, both g and z cease to relate to any meaningful physical concept, and the equation can be simplified by discounting them:

1/2(v^2) + p/ρ = constant
Continuing to refine the analogy, the density ρ of the fluid may be considered to correspond to the number of vehicles per metre of lane (though perhaps we should be employing yards as our unit of measure here, in deference to the requirements of the Road Traffic Act). The Bernoulli equation holds true for an incompressible fluid, and the avoidance of 'compression' of the traffic (which would increase its density) is precisely what we seek to achieve.

p, the pressure of the fluid, is a slightly more nebulous concept in traffic flow terms, corresponding essentially to the extent to which motorists are compelled by obstruction to travel more slowly than the nominal speed limit for the road allows.

Defining pressure in terms of vehicles travelling along a road is challenging, but I propose to use the 'mean free path' (MFP) concept from Collision Theory as the basis for a model.

The MFP for a gas can be defined as follows:

MFP = (kB x T)/(2^(1/2) x pi x d^2 x p)

where kB is the Boltzmann constant, T is the temperature, p is the pressure, and d is the diameter of the gas particles.
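As a quick sanity check of this formula before stretching the analogy, a minimal Python sketch evaluating the MFP for air at roughly room temperature and atmospheric pressure (the molecular diameter is an approximate, assumed textbook value):

from math import pi, sqrt

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0           # temperature, K (roughly room temperature)
p = 101325.0        # pressure, Pa (atmospheric)
d = 3.7e-10         # effective molecular diameter of air, m (approximate)

mfp = kB * T / (sqrt(2) * pi * d**2 * p)
print(mfp)  # of the order of 7e-8 m, i.e. tens of nanometres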


On the (hopeful) assumption that vehicles on a road do not actually collide, the MFP may nonetheless be considered to correspond to a measure of the freedom of manoeuvre enjoyed by the individual motorist (which is of course dependent on the weight of traffic).

For these purposes, d may be defined as the width of a lane (as multiple vehicles cannot occupy a single lane at the same point along the road). Since the lane width is a constant, we shall ignore it. kB, pi and 2^(1/2) are constants of proportionality, which we may also ignore. The temperature features in the equation because it affects the velocity of the individual particles; since temperature does not affect driving speed, we shall ignore it too.

Effectively then, the pressure experienced by the traffic is inversely proportional to the MFP, which itself depends on the weight of traffic, i.e. on the length of lane per car.


p/ρ thus becomes (length of lane per car)/(cars per length of lane), which is (length of lane per car)^2.


Overall then, 1/2(v^2) + (length of lane per car)^2 = constant.


We have established that in order to prevent congestion we must prevent a reduction in the length of lane per car.


Rearranging the above equation thus gives:


length of lane per car = (constant - 1/2(v^2))^(1/2).


Where two lanes merge into one, the number of cars per length of lane doubles. To compensate for this effect (and thus prevent an increase in traffic density), the velocity of the traffic must increase, so as to effectively double the length of lane per car compared with its value in each of the original lanes.


Let the velocity of the traffic in the two lanes be v(1), the velocity of the traffic in the merged single lane be v(2), and the length of lane per car be L:


L = (constant - 1/2(v(1)^2))^(1/2)


2L = (constant - 1/2(v(2)^2))^(1/2)


This rearranges such that v(2) can be defined as (4v(1)^2 - 6(constant))^(1/2). The value of the constant can presumably be determined by observation, but it is necessarily non-negative. Note also that 6(constant) cannot exceed 4v(1)^2 without resort to imaginary numbers.


This equation tends ever closer to v(2) = 2v(1) as v(1) increases. To a first approximation then, v(2) = 2v(1) may be used as a guide in determining the speed limit change required to prevent traffic congestion at a point where lanes merge, i.e. a dual carriageway with a speed limit of 60 mph should merge into a single lane with a speed limit of 120 mph immediately beyond the merge point.
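A minimal Python sketch of the merge-speed relation above (both the constant and the approach speeds are assumed values, purely for illustration; note how the result approaches 2v(1) as v(1) grows):

from math import sqrt

def merge_speed(v1, constant):
    # v2 = sqrt(4*v1^2 - 6*constant); returns None where the root would be imaginary.
    expr = 4 * v1**2 - 6 * constant
    return sqrt(expr) if expr >= 0 else None

constant = 100.0  # assumed, purely for illustration
for v1 in (30.0, 40.0, 60.0, 70.0):
    print(v1, merge_speed(v1, constant), 2 * v1)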



Musings on collective decision-making.

The world is a complex place- a complex system in fact. To put it another way, reality unfolds according to the interplay of so many factors that it is physically impossible to model it accurately. This is why weather prediction tends not to be meaningful more than a few hours in advance.

Most of the high level issues with which a political or business leader must contend exhibit similar complexity; every possible course of action yields outcomes that cannot be predicted with certainty, due to the complex interplay of causal relationships that are influenced by said action. A wise decision-maker therefore recognises that most important questions do not have a 'right' or 'wrong' answer. This realisation does nothing to inspire confidence in one's ability to make decisions in the first place.

By contrast, consider an individual who, either through an unwillingness to take account of all the relevant information, or a lack of capacity to do so, fails to perceive the inherent complexity of a given problem, and the many caveats that must accompany any possible solution to it. To this individual, who, willfully or otherwise, models complex problems as simple ones, a 'right' or 'wrong' answer can readily be perceived to most questions. Their decisions will fail to account for all relevant factors of course, making them decidedly unreliable, but their view of the world will not suffer the uncertainty that plagues the wise decision-maker, and their confidence will not thus be undermined.

I therefore propose that strong opinions on complex issues are the preserve of those with poor reasoning skills, while good reasoning skills yield a recognition of uncertainty and a corresponding tendency to indecisiveness. If we assume that this relationship takes the form of a simple inverse proportionality, then

OD = K

where O is a measure of how opinionated an individual is, D a measure of the quality of their decision making ability (essentially how amenable to reason they are), and K is a constant.

Assuming this relation to hold true across humanity in general, the overall effect one might expect to observe is a society disproportionately influenced by the most opinionated, and thus least rational. Said society would therefore behave in a manner suggestive of a collective reasoning ability well below that of the average of its citizens, which I would suggest is consistent with observation.
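A small Python sketch of this effect (the population values and the constant K are invented purely for illustration): if each individual's influence on the collective is taken to be proportional to how opinionated they are, with O = K/D, then the influence-weighted average of decision-making quality falls below the simple average.

import random

random.seed(1)
K = 1.0
# Decision-making quality D for a hypothetical population (values assumed for illustration).
D = [random.uniform(0.2, 1.0) for _ in range(10000)]
O = [K / d for d in D]  # opinionation, from O x D = K

simple_average = sum(D) / len(D)
# Weight each individual's quality by how forcefully their opinion is pushed.
weighted_average = sum(o * d for o, d in zip(O, D)) / sum(O)

print(simple_average, weighted_average)  # the weighted figure is the lower of the two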

Monday, February 14, 2011

CAMRA London Pub Density Analysis.
The 'CAMRA Good Beer Guide' is an annual publication, containing ca. 4,500 brief reviews of public houses in the UK serving at least one ale. (A Public House that doesn't serve at least one ale does not warrant consideration in the eyes of the 'Campaign for Real Ale', or indeed my own). The 2009 guide has helpfully divided London and its surrounds by postcode district, which permits analysis of the city and its facilities with particular focus on the needs of the middle-aged alcoholic. Plots of the number of CAMRA-approved drinking establishments by postcode district for the NW, N, E, EC, WC, W, SW and SE postcode areas are shown in figures 1-7 (the districts on each graph are arranged from left to right, in order of increasing distance from the city centre).

Figure 1: Number of CAMRA-approved pubs by EC postcode district

Figure 2: Number of CAMRA-approved pubs by NW postcode district

Figure 3: Number of CAMRA-approved pubs by N postcode district

Figure 4: Number of CAMRA-approved pubs by W/WC postcode district

Figure 5: Number of CAMRA-approved pubs by E postcode district

Figure 6: Number of CAMRA-approved pubs by SW postcode district

Figure 7: Number of CAMRA-approved pubs by SE postcode district
Several observations can be made:
North and East London exhibit a relative paucity of ale-houses, compared to the South and West.
The WC and W areas exhibit the highest mean density of pubs per district (4.7 pubs/district), while the SE and SW areas exhibit the largest numbers of pubs overall, although account must be taken of the considerable geographic expanse of the SE and SW areas (for example there are 37 pubs in the South-East of Greater London, but this area encompasses the entirety of Orpington, Bromley and Croydon, and thus does not equate to a particularly high pub-density).
The most pub-dense inner London district is SW1, which contains 13 CAMRA-approved establishments.
Consideration of the geographic arrangement of the pub-dense districts suggests that the ideal base from which an ale-appreciator should operate is in or near SW4; this district contains 4 CAMRA-approved establishments, is less than a mile from SW1, and comparatively close to the pub-dense W postcode districts.

Friday, January 07, 2011

Force Fields

The energy fields featured as a plot device in shows such as Star Trek would certainly be of considerable value to the military- the premise appears to involve some form of invisible barrier which performs the function of conventional armour, though more effectively; projectiles and energy weapons either bounce off or are absorbed by the shield. The same principle considered in the previous 'Light Sabre' post may be used to form an energetic barrier of sorts- simply projecting a series of points at which laser light is focused could form a 'wall' of intense heat sufficient to destroy incoming projectiles attempting to pass through it.

The science fiction force field defences were also effective against energy weapons, and it is unclear how this could be achieved, but in any case, attempting to project some form of 'blocking' energy around an entire spacecraft simultaneously will inevitably require considerably more power than generating an energy beam at a single point with power equal to or greater than the 'shield energy' at that point. Assuming weapon and shield are both provided by the same power plant, such a spacecraft will not be able to withstand an attack by its own guns (it would be 'unbalanced', in old naval parlance).

Point defence is much more efficient- energy can be focused and targeted rapidly by computer control, in order to intercept incoming projectiles at specific points. The previously referenced developments in 3-d laser projection illustrate the flexibility achievable by such a system.
Light Sabres

The 'light sabre' of the 'Star Wars' franchise is something of an oddity: essentially a glowing sword blade comprising some incorporeal material that allows the user to cut or burn through opponents.

It has often been pointed out that a laser beam cannot be used to create this effect- a laser shone from the handle cannot be compelled to 'stop' at a fixed distance to create a blade of finite length. It is also frequently suggested that the path traced out by a laser beam doesn't glow in the way usually portrayed. This isn't quite true however- in a vacuum certainly, the path of a laser beam is invisible, but a sufficiently energetic laser will ionise air to create a convincingly visible beam in the atmosphere (http://www.youtube.com/watch?v=t65_JJrLFZ8&NR=1).

Recent advances in computer-controlled lasers have demonstrated the ability to project images in 3-dimensional space (http://www.aist.go.jp/aist_e/latest_research/2006/20060210/20060210.html). The technique uses a lens to focus a laser beam onto a chosen point in space, creating a plasma 'flashpoint'. Since the focal point of the laser can be moved rapidly, it is possible to create a large number of such flashpoints almost simultaneously.

An effect comparable to the 'light sabre' could thus be created relatively simply by generating a line of flashpoints by rapidly cycling the focal length of a laser source up and down the intended length of the 'blade'. See figure 1:





Figure 1: Light sabre projection
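A minimal Python sketch of the focal-length cycling just described (the blade length, cycle rate and flashpoint rate are all assumed values): the focal point is swept up and down the 'blade' as a triangle wave, giving the sequence of distances at which flashpoints would be generated.

blade_length = 1.0     # metres (assumed)
cycle_rate = 200.0     # full up-and-down sweeps per second (assumed)
flash_rate = 20000.0   # flashpoints generated per second (assumed)

def focal_length(t):
    # Triangle wave between 0 and blade_length at the given cycle rate.
    phase = (t * cycle_rate) % 1.0
    return blade_length * (2 * phase if phase < 0.5 else 2 * (1 - phase))

flashpoints = [focal_length(n / flash_rate) for n in range(200)]
print(flashpoints[:10])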

Creating the nastier destructive effects of the sword is then a matter of increasing the laser's power output. It should be noted that one could not block the 'blade' with another blade- they'd pass straight through each other. Also achieving the required energy and power density in a portable device would be a tall order.

Thursday, January 06, 2011

Electric Armour

High tensile strength materials are extremely useful. As well as finding applications in ropes and cables they combine with high compressive strength materials to create composites with impressive mechanical properties. (Reinforced concrete for example, combines the compressive strength of concrete with the tensile strength of steel). In principle the highest achievable tensile strength for a polymer strand would require it to comprise a single, giant macromolecule spanning its entire length, but this is a rather difficult proposition. An alternative might be to compel much shorter polymer chains to connect themselves together electrostatically.

The attractive force between two oppositely charged plates in a parallel plate capacitor is given by F(att) = (EoAV^2)/(2d^2), where 'Eo' is the permittivity of free space, 'A' is the area of a plate, 'V' is the potential difference between the plates and 'd' is the separation distance between the plates. In principle then, the tensile strength of the two plates (in the sense of the force required to separate them) is limited only by the magnitude of the voltage that can be applied across them.
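To give a feel for the numbers, a short Python sketch evaluating F(att) = (EoAV^2)/(2d^2) for some assumed, purely illustrative plate dimensions and voltages (dielectric breakdown is ignored):

Eo = 8.854e-12  # permittivity of free space, F/m

def plate_attraction(area, voltage, separation):
    # Attractive force between parallel plates: F = Eo * A * V^2 / (2 * d^2).
    return Eo * area * voltage**2 / (2 * separation**2)

area = 1e-4        # m^2 (a 1 cm x 1 cm plate, assumed)
separation = 1e-6  # m (assumed)
for voltage in (10.0, 100.0, 1000.0):
    print(voltage, plate_attraction(area, voltage, separation))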

The force of attraction is of course inversely proportional to the square of the distance between the plates, but capacitors can be arranged in series, as in figure 1:

Figure 1: Parallel plate capacitors in series

In figure 1, the attractive force between each pair of adjacent plates is again given by (EoAV^2)/(2d^2), where 'A' is the area of a plate, but 'd' is now the sum of the separations between all the plates in the series. If the voltage 'V' is sufficiently large, however, the tensile strength of this 'string' of capacitors is limited only by the strength of the connecting wires.

Returning to the matter of the tensile strength of polymer strands, it is proposed that a 'string' of capacitors be created on the molecular scale. For connecting wires, we may substitute an unsaturated polymer chain constituting a conjugated pi-system, see figure 2 (this is capable of conducting electron density along its length).


Figure 2: Conjugated pi-system- conducting polymer.

The construction of a 'molecular capacitor' is perhaps a little more challenging. A capacitor is fundamentally an arrangement of two opposite charges separated by a dielectric, such that current may not flow between them. In principle then, a conducting molecular chain terminating in a non-conducting moiety could form the building blocks of such a device. In order to make a capacitor, at least two of these moieties must be positioned in close proximity, and unfortunately, molecules cannot generally be placed wherever we see fit.

We might compel long polymer chains to align themselves roughly parallel to one another (this is essentially how liquid crystals behave). We might also be able to persuade them to align end-to-end in the desired manner, by making the individual molecules zwitterionic.

A (relatively) simple example of such a molecule might be as shown in figure 3:

Figure 3: Zwitterionic molecular capacitor component

The innate charges on each end of the polymer (the ethylene functionality in the middle may be repeated to extend the chain) should serve to orientate the molecules end to end, and the application of a large potential difference along the length of the chain should strengthen these electrostatic interactions, creating a material with immense tensile strength, see figure 4:


Figure 4: Molecular capacitor series.

Assuming the molecular chain continues to behave like a capacitor series, the electrostatic interactions between the individual molecules may actually exceed the strength of the bonds within the molecules, given a sufficiently substantial applied voltage- a step towards the 'force-fields' that grace so many of the more dubious works of science fiction. (The latter typically view creating a force field as 'build a wall, then take away all the atoms'...)
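As a rough plausibility check of that last claim, a short Python sketch comparing the Coulomb attraction between two single elementary charges at an assumed, illustrative intermolecular separation with the nanonewton-scale forces generally reported for mechanically rupturing a single covalent bond:

k = 8.988e9    # Coulomb constant, N m^2/C^2
e = 1.602e-19  # elementary charge, C
r = 3.0e-10    # assumed end-to-end separation of the charged termini, m

force = k * e**2 / r**2
print(force)  # of the order of 1e-9 N (nanonewtons), the same order of magnitude
              # as the forces typically quoted for breaking a single covalent bond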

Monday, December 20, 2010

Electromagnetic Water Softening

I am perplexed by the proliferation of commercial 'magnetic water conditioning' products. These essentially comprise magnets strapped to the side of water pipes, which do not claim to soften water, but allegedly 'condition' it in such a way as to prevent lime-scale deposition. This relies on the notion that liquid water has some form of stable, supramolecular structure which can be modified, and is patent nonsense on a par with homeopathy. It strikes me, however, that water can in principle be softened by electromagnetic means, thus avoiding the need for ion exchange resins (which must be regularly replenished). Two possible devices are proposed.

1: A large tubular reservoir of water is surrounded by the coils of an electromagnet. The water molecules carry no net charge, and are therefore unaffected by the magnetic field, but the charged ions which make water 'hard' experience a force perpendicular to the field lines which compels them to move in circles. As long as the original motion of an ion has a component parallel to the field lines, this motion becomes a spiral (either clockwise or anticlockwise, depending on the sign of the charge on the ion). Inevitably the ions tend towards the upper or lower extremes of the cylinder, while the middle region is correspondingly depleted. By placing outlet pipes in appropriate positions on the cylinder, and assuming a suitably low flow rate, hardened or softened water can be fed to different outlets. (I would suggest toilets and outside taps for the hard water, and showers and kitchen supplies for the soft.)
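For a feel of the scale of the circular motion involved, a minimal Python sketch computing the radius of the circle, r = mv/(qB), for a calcium ion; the flow speed and field strength are assumed values, purely for illustration:

m = 40 * 1.66e-27   # mass of a calcium ion, kg (roughly 40 atomic mass units)
q = 2 * 1.602e-19   # charge on a doubly charged ion, C
v = 1.0             # speed component perpendicular to the field, m/s (assumed)
B = 0.5             # magnetic flux density, T (assumed)

radius = m * v / (q * B)
print(radius)  # a fraction of a micrometre - the circles are extremely tight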


2: Much the same effect could be achieved by applying a large potential difference between the two extremities of the cylindrical reservoir, in place of the electromagnet, effectively creating a very large parallel plate capacitor. The potential difference required may be somewhat hazardous, however.