The actual simulation is pretty much the same as what we had in the
unimodal case: we will have a series of inference-production cycles,
which we will iterate until something interesting happens (or not).

Therefore, we can basically take the same code we had for the unimodal
distribution (but this time, the definitions of **production** and
**inference** are different).
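The definitions themselves were given in the earlier section; in case you do not have them at hand, definitions in that spirit can be sketched from the mixture CDRs that appear later in this section (this is only an illustrative sketch, and the exact definitions are the ones from the earlier section):

```r
# A sketch only: the mixtures match the CDRs used later in this
# section, but the real definitions were given in the earlier section.
pmyspeaker <- function(x) (pbeta(x, 18.5, 3) + pbeta(x, 4.5, 22.5))/2
pmyhearer  <- function(x) (pbeta(x, 20, 3.5) + pbeta(x, 3.5, 20))/2
# production: the probability mass the speaker puts above n
production <- function(n) 1 - pmyspeaker(n)
# inference: the n the hearer reconstructs from that mass, obtained by
# numerically inverting the hearer's CDR with uniroot
inference <- function(p) {
  uniroot(function(m) (1 - pmyhearer(m)) - p, interval = c(0, 1))$root
}
```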

# initialize values for loop
n <- .999 # the initial value for n
N <- c(n) # this is where we will store the values we will plot; we add n
k <- 200 # the number of iterations
while(k>0){
n <- inference(production(n)); # make a learning cycle, assign the result to n
N <- c(N, n); # add the new n to the end of the list N
k <- k-1; # decrease the counter by one
}
plot(N)

Now, we see that - if we start with 0.999 - the curve does not drop
all the way down to 0, but seems to converge to something like 0.6. We
also see that R by default only shows the area where there are some
values. We will see below how we can correct this behaviour.

Another question we may ask is what happens with other values, and
what is the global behaviour of the system. We can explore this if we
do not only plot one starting value, but several of them at regular
intervals - let's say one at 0.1, another at 0.2, another at 0.3,
etc., up to 1. The easy - but tedious - way would be to repeat the
code snippet above, starting the loop again with another starting
value. This is some copy-and-paste work, and for ten values, it
might be feasible. However, there is a less verbose way of achieving
it, and I will show how it works by making 100 different starting
points (which would be quite tiresome to write out).

The key to doing that is the **sapply**-function. It takes a vector
as its first argument and a function as its second, and applies the
function to each element of the vector, one by one.
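A minimal illustration in the R console:

```r
# apply the squaring function to each element of the vector in turn
sapply(c(1, 2, 3), function(x) x^2)
# returns 1 4 9
```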

Our function will be the evolutionary while-loop (which we will have
to transform into a function), and the vector will be composed of the
different starting points. Let's start with the function. We can
rewrite the while-loop as follows:

# evol is the function that wraps the evolution of each separate
# starting value of n, and checks what happens through k generations
evol <- function(n) {
k <- generations; # the counter, or: the number of generations to be tracked
N <- c(n); # the list we will plot
while(k>0){
n <- inference(production(n)); # the inference-production cycle
N <- c(N, n); # update N with result
k <- k-1; # decrease counter
}
points(N) # add the points to the existing plot, without erasing it
return(N)
}

The **evol** function contains the while-loop plus the setting of
generations and the vector N we want to plot. It will draw the
points into the current plot, and return the vector of values N (in
case we want to inspect what happened to the values).

Notice that in the loop, I have not used **plot**, but **points** (I could
also have used **curve**). The difference is that **plot** starts a new
plot, erasing anything that you had before, whereas **points** (and
**curve** with add=TRUE) add to the existing one.
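A quick way to see the difference for yourself:

```r
plot(c(1, 2, 3), c(1, 4, 9))    # starts a fresh plot
points(c(1, 2, 3), c(2, 3, 5))  # adds points to the same plot
curve(x^2, add = TRUE)          # curve needs add=TRUE to do the same
```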

Let us now look how we can construct the vector of starting points.
Obviously, we could just have written it out:

vector <- c(0, 0.01, 0.02, 0.03) # and so on

But this really is annoying. Imagine you needed to check 1000 starting
points! Fortunately, there is an easier way to do that, with the **seq**
function:

seq(from, to, by)
seq(0, 1, 0.01)
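You can convince yourself of what this produces:

```r
s <- seq(0, 1, 0.01)
length(s) # 101
range(s)  # 0 1
```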

So, seq(0, 1, 0.01) will give us a vector that starts with 0 and then
increases in steps of 0.01 until it reaches 1, so that we obtain a
vector with 101 members (0, 0.01, 0.02, … 0.98, 0.99, 1). We can now
put the pieces together, and write:

generations <- 100
# initialize values for loop
n <- .999 # the initial value for n
N <- c(n) # this is where we will store the values we will plot; we add n
k <- generations # the number of iterations
while(k>0){
n <- inference(production(n)); # make a learning cycle, assign the result to n
N <- c(N, n); # add the new n to the end of the list N
k <- k-1; # decrease the counter by one
}
plot(N,ylim=c(0,1)) # ylim sets the upper and lower end of the y-axis
sapply(seq(0,1,.01), evol)

And this is what it looks like:

Notice the ugly Moiré pattern, particularly in the upper half. In
practice, it is better to make fewer plots (I did 10 in the
publication), to get a cleaner general look. We will see below how one
can improve the graphics (and the axis labelling). Let us first look
at what the graphics itself shows us.

In previous plots, the values either dropped or remained stable. In
the picture below, some values actually *rise*. How come? Is there
something wrong with our coding? Normally, speaker-overestimation
should make the value of *n* drop; yet here, in some places, it
rises, even though the speaker overestimates each curve individually,
as we saw above. Now, this might be an instance of numerical error.
But it might just as well be a genuine property of the system. Can we
know which one it is?

Indeed we can. The secret is to calculate the *intercept* (as Grégoire
Winterstein was kind enough to explain to me) between speaker and
hearer CDRs. The idea is the following: if the speaker probability
distribution has more probability mass above *n* than the hearer
probability distribution, the value of *n* will go down; if they are
equal, *n* will be stable; if the hearer distribution has more
probability mass above *n* than the speaker distribution, *n* will
rise.

This is once again something we can plot. The CDR (in R, for a
beta-distribution, **pbeta**) gives us the probability mass up to some
point; so we can use this as a basis.

# the speaker's CDR
pmyspeaker <- function(x) {
(pbeta(x,18.5,3) + pbeta(x,4.5,22.5))/2
}
# the hearer's CDR
pmyhearer <- function(x) {
(pbeta(x,20,3.5) + pbeta(x,3.5,20))/2
}
# pmyspeaker gives what is below x; I want to know what is the
# probability mass above it, which is 1-pmy{speaker,hearer}
pintercept <- function(x) {
(1 - pmyspeaker(x)) - (1 - pmyhearer(x))
}
curve(pintercept, xlim=c(0,1)) # plot the function, with values for n from 0 to 1

We see that, most of the time, the speaker's CDR seems to be higher
than the hearer's CDR. However, the plot is not very legible, mainly
because we cannot see the zero line very well. Crucially, we would
like to know what happens in the interval where the values rise.
We can add a straight line with the following command:

# inserts a straight line at 0, which is horizontal, and dotted
abline(h=0, lty="dotted")

And we are not interested in the whole area from 0 to 1. The above
picture makes it rather clear that the values should drop on wide
ranges (as they do, in fact); we want to zoom in on the specific
region where they rise, which is roughly between 0.44 and 0.60.

plot(pintercept, .44, .605, # gives us the boundaries
main = "Detail of the plot above", # gives the plot a heading
ylab = "difference", # label the y-axis
xlab = "n") # label the x-axis
abline(h=0, lty="dotted") # add a dotted baseline at y=0

So it turns out that in the region where *n* rose, the speaker's
probability mass is indeed less than the hearer's, and that
therefore, *n* will rise. Here, it comes in handy that R auto-adjusts
the axis range to the values: as the y-axis makes clear, the
difference between the two CDRs is *very* small.
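Incidentally, we do not have to read the two crossing points off the plot: R's **uniroot** function finds a zero of a function inside an interval where its sign changes. Repeating the definitions from above so that the snippet is self-contained:

```r
pmyspeaker <- function(x) (pbeta(x,18.5,3) + pbeta(x,4.5,22.5))/2
pmyhearer <- function(x) (pbeta(x,20,3.5) + pbeta(x,3.5,20))/2
pintercept <- function(x) (1 - pmyspeaker(x)) - (1 - pmyhearer(x))
# one zero on each side of the region where the values rise
uniroot(pintercept, interval = c(0.4, 0.5))$root # close to 0.455
uniroot(pintercept, interval = c(0.5, 0.7))$root # close to 0.5986
```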

We know now what we wanted to know. Let us now see how we can show
others in a decent way what we know, and how we can save the result of
our work. You are not obliged to work on the R command line; you can
also create a file with your favourite text editor, and tell R to
execute your commands. For instance, copy and paste the code below
into a file that you call "intercept.r". Then, in your R command
line, call

source("/path/to/intercept.r")

(where you replace /path/to with the place where the file is located
on your computer, e.g., source("C:\\stuff\\intercept.r") on Windows,
or source("/home/gummybear/intercept.r"))

# we will create a svg file, called "intercept.svg"
svg("intercept.svg")
# we define the functions we will plot (the same as above); since the
# sourced file must define everything it uses, the two CDRs are
# repeated here
pmyspeaker <- function(x) {
(pbeta(x,18.5,3) + pbeta(x,4.5,22.5))/2
}
pmyhearer <- function(x) {
(pbeta(x,20,3.5) + pbeta(x,3.5,20))/2
}
pintercept <- function(x) {
(1 - pmyspeaker(x)) - (1 - pmyhearer(x))
}
# put two plots one below the other
par(mfrow=c(2,1))
# plot the function, add a heading and axis-labels
plot(pintercept,
main = "Subtracting Hearer CDR from Speaker CDR",
ylab = "difference",
xlab = "n")
# Add some lines to make clearer how things work
abline(h=0, lty="dotted")
abline(v=0, lty="dotted")
abline(v=1, lty="dotted")
abline(v=0.455, lty="dotted")
abline(v=0.5986, lty="dotted")
# and add some arrows, which make the system behaviour explicit:
arrows(x0=0.82, y0=0.04, x1=.78, y1=0.04, length = 0.05, angle = 25,
code = 2, col = par("fg"), lty = par("lty"),
lwd = par("lwd"))
arrows(x0=0.51, y0=0.04, x1=.55, length = 0.05, angle = 25,
code = 2, col = par("fg"), lty = par("lty"),
lwd = par("lwd"))
arrows(x0=0.32, y0=0.04, x1=.28, length = 0.05, angle = 25,
code = 2, col = par("fg"), lty = par("lty"),
lwd = par("lwd"))
# end of first plot
# start second plot
plot(pintercept, .44, .605,
main = 'Detail of the first plot',
ylab = 'difference',
xlab = 'n')
# note that it does not make any difference if you use "", or '' for
# the text
abline(h=0, lty="dotted")
abline(v=0.455, lty="dotted")
abline(v=0.5986, lty="dotted")
# close the device, in order to make sure the svg gets properly printed:
dev.off()

After having run **source** on the file, you should obtain an
svg file called *intercept.svg*, which should look like the following: