For Christmas, I bought my Dad an Amazon Echo Dot. Its listening capabilities really impressed me, and it got me thinking about what features it offered to developers. After looking into the Alexa Skills API, and seeing how easy it was to get started, I decided I just had to buy one for myself. But I needed to justify it somehow…

A few weeks ago I decided that manually opening and closing my blinds every day was simply too much for me to do anymore. Not only did I have to pull the cord to open the blinds, I also had to position myself next to the blinds in order to reach the cord. It was a significant time waster, consuming an estimated 10 seconds of my life each day just opening and closing blinds. I did some important calculations and discovered that if I spent 10 seconds a day on the frankly old-fashioned task of opening the blinds myself, I’d be wasting *70 seconds a week*. That adds up to approximately *an hour a year*.

Here’s a diagram of the system I designed:

Not strictly speaking a “front end” in the typical sense, but for the purposes of describing the system it makes sense to treat all the “Amazon” stuff as the front end, because there’s not really any development required here. Amazon really shines here by making this step easy to get the hang of. I encourage you to read the documentation, but simply put, you create an Alexa Skill and link it to trigger an AWS Lambda function. The general idea is to use the Alexa Skills Kit portal to configure how you want to interact with Alexa – the key aspects being the “invocation name” (which can be thought of as the app name) and the “interaction model” (which is essentially a mapping between what users say to your app and what methods are called on the AWS Lambda function). Easy! (Note for non-US users: make sure you match the language of the skill to the language you configured for your Echo, otherwise the skill won’t show up on your device!)

This encompasses the AWS Lambda function and my own “relay” server. Quite simple really: the Lambda function just makes an HTTP POST to my relay server. The relay server, you guessed it, relays the request to the Raspberry Pi, where the real magic happens.
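To sketch that flow in Python: the handler below just forwards the intent to the relay. Everything here is illustrative – the relay URL, the slot name and the response wording are placeholders of mine, not the actual code:

```python
import json
import urllib.request

# Placeholder relay endpoint -- substitute your own server's address.
RELAY_URL = "https://example.com/blinds"

def build_request(intent):
    """Translate an Alexa intent into the JSON body POSTed to the relay."""
    command = intent["slots"]["command"]["value"]  # e.g. "open" or "close"
    return json.dumps({"command": command}).encode("utf-8")

def lambda_handler(event, context):
    """Entry point invoked by the Alexa Skill via AWS Lambda."""
    body = build_request(event["request"]["intent"])
    req = urllib.request.Request(RELAY_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Done"},
            "shouldEndSession": True,
        },
    }
```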

By far the hardest part of this project was interfacing between the blinds chain and the motor. Fortunately, I found someone had made a 3D model of a gear which interfaces with my blinds perfectly:

https://www.thingiverse.com/thing:467647

I had it printed out by a small 3D printing shop here in London, and after a small modification we were in business. I then had to somehow suspend a motor in the correct place to interface with the gear. Easier said than done, since I didn’t want to drill into any walls. I came up with a frankly hideous solution involving a weight, a wooden plank, 2 clamps and a piece of string. It’s so specific to my situation it’s not even worth talking about, but you can just about see it in the video at the top.

The rest was more or less straightforward. I realised the blinds were a lot heavier than I thought when a stepper motor failed to provide enough torque to pull them up, so I went a bit overkill:

- 13A motor controller: http://www.robotshop.com/uk/cytron-13a-5-30v-single-dc-motor-controller.html
- 7,92W DC motor: http://uk.rs-online.com/web/p/dc-geared-motors/4540849/

That sends the blinds up and down with ease, though it’s a little bit loud…

The final bit of hardware: something to know where the blinds were. Originally, I was going to put a magnet on the gear and have a reed switch detect how many revolutions the motor made. I’d then calculate the revolutions needed to open and close the blinds, which would allow me to open the blinds halfway. This required a lot more effort to mount the magnet and the reed switch precisely, so I opted for a simpler solution: a lever switch mounted at the base. When the blinds make contact with the switch, we know we’re at the bottom of the run. I could then time how long it took to open the blinds, and be more or less confident in its accuracy, since every run would start from the exact same point, guaranteed by contacting the switch at the bottom. Note: doing away with any kind of feedback loop would be a disaster. If you just timed the up/down travel time, you’d introduce inaccuracies (since the travel time down was not equal to the travel time up). Over the course of opening and closing the blinds a few times, unless you somehow timed with absolute precision and there was absolutely no fluctuation in the time taken, you’d quickly find the blinds lose their “home” position and drift out of their operating range.

With that sorted, all that was left was the software. I wrote a simple script in Python to control the motor. The control script took responsibility for knowing where the blinds were, so if I asked them to open once and then open again, the control script would refuse to open the blinds (actually it was even simpler – if the switch wasn’t currently pressed, the control script would assume the blinds were open and would only close).
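The decision logic of that control script can be sketched as a pure function. The names here are my own, and the real script also has to drive the motor and time the travel – this just captures the “switch defines the state” idea:

```python
def decide(command, switch_pressed):
    """Decide what the motor should do.

    If the bottom lever switch isn't pressed, we assume the blinds are
    already open, so the only valid command is "close". If it is pressed,
    the blinds are at their home position and may only be opened.
    """
    if command == "close" and not switch_pressed:
        return "drive_down"   # run until the lever switch is hit
    if command == "open" and switch_pressed:
        return "drive_up"     # run for the calibrated travel time
    return "ignore"           # refuse redundant commands
```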

That’s it! The code isn’t anything special, so it’s not worth posting. Please ask if you have any questions though!

So first of all, some basic definitions.

**Stock** – In this context, a “stock” is a holding in a company that might be traded through the stock market.

**Portfolio** – A collection of financial assets. In this context, I’ll be talking in terms of holdings of financial stocks. So for example, £30 in company A, £70 in company B. For the purposes of this blog post, it is better to think in terms of percentages. In the previous example, that would therefore be 30% in company A and 70% in company B if I had a total portfolio of £100.

Markowitz Portfolio Optimization (modern portfolio theory, or MPT for short) is a theory used in the real world to decide how best to invest your money in a given set of stocks (Markowitz, 1952). It works with two underlying variables: risk and return. It attempts to maximize the return of a portfolio for a given allowable risk, or minimize the risk of a portfolio for a desired return. For example, you may say you want a return of 10%, and you want to invest in a selection of, say, 20 stocks. Using some relatively straightforward calculations that we’ll cover, we can find the optimum portfolio (i.e. the proportions in which we invest our money in each stock) such that the risk of losing money is minimized while still reaching our target return.

Sounds useful! We tell it the stocks we’re interested in, and it tells us how much to invest in each one according to our desired risk or our desired return. But how do we define risk and return? In order to do so, we need to define some mathematical variables. Throughout, we’ll assume we’re interested in only three stocks: A, B and C.

**w** – A vector representing our portfolio, where each element is the proportion of our money invested in the corresponding stock.

**R** – A matrix containing the historical return data of each stock, where each column represents an individual stock, and each row of the column represents the return of that stock at a point in time.

**m** – A vector representing the average return of each individual stock, calculated from historical data over a period of time. It is calculated by averaging the proportional change over that period. For example, we could take the value of stock A at the start and end of each day, compute the day-to-day proportional change, and average those changes.

We can perform this calculation in MATLAB like so:

```
% Ntraining = number of samples to use in training w
% R = historic stock data samples
m = mean( R(1:Ntraining, :) );
```

**C** – A matrix representing the covariances between all of the stocks (AKA the covariance matrix), calculated from historical data over a period of time. If you’re unfamiliar with covariance matrices or you need a refresher, here is a good online resource. While the formulation of such a matrix is not important to remember, since it can easily be calculated using MATLAB or other tools, it is important to understand the significance of the covariance matrix in this context. Put simply, the covariance is a measure of the correlation^{1} between two variables. If two variables go up over time, they have a positive covariance. If one variable goes up and the other goes down, they have a negative covariance. In this context, the covariance matrix represents the correlations between stocks A, B and C. One could say that element C_{ij} is a measure of how stock *j* correlates with stock *i*. **C** is symmetrical, since the correlation of stock *j* with stock *i* is the same as the correlation of stock *i* with stock *j*, i.e. C_{ij} = C_{ji}. The diagonal of the matrix, i.e. where *i = j*, contains the variance of each stock. The variance of a stock can be thought of as how “risky” that stock is: if it fluctuates a lot, we might consider it risky to invest in. We’ll come back to this assumption later. If this all seems a bit confusing, that’s okay. Just remember that **C** is a variable that stores the correlation between each pair of stocks (A and B, A and C, B and C) and the variance of each stock (A, B and C).

We can perform the calculation of C in MATLAB like so:

```
% Ntraining = number of samples to use in training w
% R = historic stock data samples
C = cov( R(1:Ntraining, :) );
```

Now that we have some underlying definitions and variables to use, we can define what exactly risk and return are.

Firstly, it is helpful to point out a relationship between the returns of the stocks, **R**, and the returns of a portfolio. The return of a portfolio is defined as

**ρ** = **Rw**   (1)

The resulting **ρ** is a vector in which each row represents the return of the portfolio on that particular day, given the weighting of investment in each stock. For the purposes of MPT, all we need to know is that **ρ** is a linear transform of **R**.

Secondly, we need to make an assumption about how we model the returns **R** when estimating future performance. We assume that the stock returns on any given day, **r**, follow a multivariate normal distribution:

**r** ~ N(μ, Σ)

Where:

μ = mean (**m**)

Σ = variance (**C**)

We also use the following relationship: a linear transform of a normally distributed variable is itself normally distributed, so if **r** ~ N(μ, Σ), then **w**^{T}**r** ~ N(**w**^{T}μ, **w**^{T}Σ**w**).

Using this relationship, and the relationship in (1), we have:

ρ ~ N(**w**^{T}μ, **w**^{T}Σ**w**)

Substituting mean and variance:

ρ ~ N(**w**^{T}**m**, **w**^{T}**Cw**)

Still with me? If so, the rest will be a breeze. If you didn’t quite understand the derivation above, don’t worry too much. Just make sure that you understand the two meaningful terms we obtained, return and risk, from the above equation:

**w**^{T}**m** represents the *mean return* of the portfolio. This is what we refer to when we use the term “return”. We’re multiplying the mean return of each stock, *m_{i}*, by its corresponding weighting, *w_{i}*, and summing over all stocks.

**w**^{T}**Cw** represents the *variance* of the mean return of the portfolio. This is what we refer to when we use the term “risk”. Put simply, it’s a measure of how much the stocks in the portfolio fluctuate over time. Stocks that fluctuate a lot have a higher variance and are therefore higher risk. This is the fundamental assumption of MPT. The assumption has been criticised, but let’s take it as true for now. It’s nice to work with, since it’s intuitive that a stock that jumps around a lot is more risky than one that steadily moves in a particular direction. There isn’t a nice scale to work with here, like there is with the returns that are based on percentages. However, we can still compare risks relative to one another to decide which stock is less risky than another.
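To make these two terms concrete, here’s a tiny worked example with made-up numbers (sketched in Python rather than MATLAB – it computes exactly the mean-return and variance quantities above):

```python
# Made-up mean returns and covariance matrix for three stocks A, B, C.
m = [0.02, 0.05, 0.03]
C = [[0.10, 0.01, 0.02],
     [0.01, 0.20, 0.03],
     [0.02, 0.03, 0.15]]
w = [0.5, 0.3, 0.2]  # portfolio weights, summing to 1

# return of the portfolio: w' * m
ret = sum(wi * mi for wi, mi in zip(w, m))

# risk of the portfolio: w' * C * w
risk = sum(w[i] * C[i][j] * w[j] for i in range(3) for j in range(3))
```

With these numbers the portfolio return works out to 0.031 (3.1%), and the risk to 0.0596 – a dimensionless number that only means something when compared against the risk of other candidate portfolios.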

So, now we finally have a way to calculate the **return** and **risk** of a portfolio. We base these calculations on historic data (represented in **m** and **C**). With these defined, we can state our first goal – minimum risk for a chosen return r_{0} – as an optimization problem:

minimize **w**^{T}**Cw**
subject to **w**^{T}**m** = r_{0}, **w**^{T}**1** = 1, **w** ≥ 0

For those unfamiliar with optimization problems, this simply says “*Find the portfolio weights, w, with the minimum possible risk that give me return r_{0}*”. The good thing about this minimization problem is that we’ve now got a succinct mathematical way of representing exactly what we said in words before. There are many mathematical tools out there that can solve such problems, and below is an example using the CVX toolbox for MATLAB, which performs the minimization. CVX is nice because it has an intuitive interface which is quite similar to the mathematical representation.

```
cvx_begin
variable w(N) % N = number of assets
minimize(w'*C*w)
subject to
w'*ones(N, 1) == 1,
w'*m == r0,
w >= 0;
cvx_end
```

The first constraint in the above code ensures that the elements in **w** add up to 1 (100%). Of course, we cannot have 110% of our money invested in a portfolio, and we also would not want only 90% of it invested, for example. The second constraint is the constraint we’re already familiar with: it states that we must obtain the return we’re seeking, r_{0}. The final constraint keeps every weight non-negative, i.e. we cannot short-sell a stock.

So, what is the result? Running the above code gives us some weights in **w**, representing the portfolio which would give you the desired return r_{0} with the minimum possible risk.

There’s another way we could approach optimizing our portfolio, which I mentioned at the start. Rather than seeking the portfolio that gives us the lowest risk for a particular desired return, we can instead seek the portfolio which gives us the highest return within a maximum risk limit. Mathematically, we can write this as:

maximize **w**^{T}**m**
subject to **w**^{T}**Cw** ≤ σ_{0}, **w**^{T}**1** = 1, **w** ≥ 0

Similarly to before, this formula simply states “*Find the portfolio weights, w, that give me the maximum possible return within a risk of σ_{0}*”. Again, numerous mathematical tools exist to solve this kind of optimization problem. The code for CVX is as follows:

```
cvx_begin
variable w(N) % N = number of assets
maximize(w'*m)
subject to
w'*ones(N, 1) == 1,
w'*C*w <= s0, % an equality here would be nonconvex; bound the risk instead
w >= 0;
cvx_end
```

This code is very similar to the previous approach, only the risk and return terms have been swapped, and the risk limit is s0 (σ_{0}). Think about what might happen if we removed the risk constraint. If you want to know the answer, see footnote^{2}.

Hopefully you’re still with me by this point. So far, we’ve calculated two different portfolios: one in which we want to minimize risk for a given return, and one where we want to maximize return for a given risk. This is all well and good, but clearly there are some limits to what we can choose for our risk or return constraints. For example, we can’t choose to have a 200% return and expect a portfolio to be able to achieve that, no matter what risk we allow. Likewise, how do we choose a sensible limit for risk, especially since it’s dimensionless? This is where something called the *Efficient Frontier* comes in. The Efficient Frontier is a characteristic we can calculate and plot which demonstrates, for a particular selection of stocks, the best possible trade-off between risk and return. An example of an Efficient Frontier is shown in Figure 1.

**Figure 1 – Efficient Frontier example**

This demonstrates the maximum realizable returns for a range of risks. We can see that, as we increase our risk, we can obtain better returns – which intuitively makes sense. The Efficient Frontier, represented by the blue line, represents the most efficient trade-off between risk and return possible for a particular selection of stocks. All of the points on the line represent the maximum possible realizable returns for their corresponding risks. Of course, we cannot select a point above the line (i.e. we cannot seek a return greater than the point on the Efficient Frontier for a given risk). We could indeed select a point below the line (i.e. accept a lower return for the same risk), but this is inefficient. So, in order to find some sensible values to plug into our calculations in the MATLAB code above, we can plot the Efficient Frontier, choose a good trade-off between risk and return by selecting a point on the line, and plug the value of risk into our code above by setting s0 to it.

But how do we calculate the Efficient Frontier? The steps are outlined below.

1. Find the portfolio with the maximum possible return, unconstrained by risk, and note its return – this is the highest return the frontier can reach.
2. Find the portfolio with the minimum possible risk, unconstrained by return, and note its return and risk – this is the left-most point of the frontier.
3. Make a list of target returns, stepping in small increments from the return found in step 2 up to the return found in step 1.
4. For each target return, find the minimum-risk portfolio achieving that return; plotting each portfolio’s risk (x axis) against its return (y axis) traces out the frontier.

The MATLAB code to perform this is shown below. The code is adapted from (Brandimarte, 2006), but I have replaced equivalent parts with CVX code since that’s what we’re now familiar with.

```
function [PRisk, PRoR, PWts] = NaiveMVCVX(ERet, ECov, NPts)
ERet = ERet(:); % makes sure it is a column vector
NAssets = length(ERet); % get number of assets
% vector of lower bounds on weights
V0 = zeros(NAssets, 1);
% row vector of ones
V1 = ones(1, NAssets);
% Find the maximum expected return
cvx_begin
variable w(NAssets)
maximize(w'*ERet)
subject to
w'*ones(NAssets, 1) == 1;
w >= 0;
cvx_end
MaxReturnWeights = w;
MaxReturn = MaxReturnWeights' * ERet;
% Find the minimum variance return
cvx_begin
variable w(NAssets)
minimize(w'*ECov*w)
subject to
w'*ones(NAssets, 1) == 1,
w >= 0;
cvx_end
MinVarWeights = w;
MinVarReturn = MinVarWeights' * ERet;
MinVarStd = sqrt(MinVarWeights' * ECov * MinVarWeights);
% check if there is only one efficient portfolio
if MaxReturn > MinVarReturn
RTarget = linspace(MinVarReturn, MaxReturn, NPts);
NumFrontPoints = NPts;
else
RTarget = MaxReturn;
NumFrontPoints = 1;
end
% Store first portfolio
PRoR = zeros(NumFrontPoints, 1);
PRisk = zeros(NumFrontPoints, 1);
PWts = zeros(NumFrontPoints, NAssets);
PRoR(1) = MinVarReturn;
PRisk(1) = MinVarStd;
PWts(1,:) = MinVarWeights(:)';
% trace frontier by changing target return
VConstr = ERet';
A = [V1 ; VConstr ];
B = [1 ; 0];
for point = 2:NumFrontPoints
B(2) = RTarget(point);
cvx_begin quiet
variable w(NAssets)
minimize(w'*ECov*w)
subject to
w'*ones(NAssets, 1) == 1,
w'*ERet == RTarget(point), %this time we're targeting RTarget
w >= 0;
cvx_end
Weights = w;
PRoR(point) = dot(Weights, ERet);
PRisk(point) = sqrt(Weights'*ECov*Weights);
PWts(point, :) = Weights(:)';
end
end
```

Alternatively, if you have access to the Financial Toolbox, you can use the MATLAB function *frontcon* to calculate the efficient frontier for you. The two approaches give identical results, but *frontcon* is faster.

You could, as mentioned previously, simply pick a desirable risk from the Efficient Frontier and use the weights associated with it. Alternatively, you could use the Sharpe ratio to determine the optimum trade-off between risk and return by calculating the Sharpe ratio for each portfolio on the Efficient Frontier. The Sharpe ratio (Sharpe, 1994) is defined as:

S = (*r_{p}* − *r_{f}*) / *σ_{p}*

Where:

*r_{p}* = the return of the portfolio

*r_{f}* = the risk-free rate of return

*σ_{p}* = the standard deviation (risk) of the portfolio’s returns

The optimum portfolio out of all the portfolios calculated on the Efficient Frontier is the one which gives the largest Sharpe ratio. Simply iterate over the portfolios on the frontier, calculate the Sharpe ratio for each, and pick accordingly, like so:

```
%calculate the efficient frontier
[risk, returns, w] = frontcon(m, C, 100);
%calculate Sharpe ratio on the training data (assuming a risk-free rate
%of zero), and extract the best performing portfolio
sharpeTrainingRatios = returns ./ risk;
%extract the most efficient portfolio
[maxSharpeTrainingRatio, maxSharpeTrainingRatioIndex] = max(sharpeTrainingRatios);
efficientPortfolio = w(maxSharpeTrainingRatioIndex, :);
```

We saw the foundations and assumptions that MPT is based upon, the goals for optimizing a portfolio, and the formulas for calculating risk and return. We saw how to use these formulas to derive optimization problems that we can express programmatically to obtain an optimized portfolio. Finally, we saw how to find the best trade-off for a set of portfolios using the Efficient Frontier and the Sharpe ratio. Now it’s up to you to go and implement these tools, and see for yourself if you can make a good investment!

Markowitz, H. (1952). Portfolio Selection. *The Journal of Finance*, 7(1), p.77.

Brandimarte, P. (2006). *Numerical methods in finance and economics*. Hoboken, N.J.: Wiley Interscience.

Sharpe, W. (1994). The Sharpe Ratio. *The Journal of Portfolio Management*, 21(1), pp.49-58.

Radio controlled toy cars don’t normally have a Raspberry Pi, 10 batteries and a stripboard loosely hanging off them, but this isn’t your standard RC car.

The primary goal of ICC was to enable a toy car (which has now become a robot, as seen in the image above) to be controlled in real-time over the internet. I did just that, and you can try it out right now!

Come and drive my Internet-Controlled Car: http://projects.bitnode.co.uk/ICC/

Read on for a technical write-up of how I completed the project.

I didn’t want to create my own chassis since the job I could do would have been far less robust than using some pre-existing model as a base. So, I opted for modifying something that already existed. ~~The only toy car I had lying around wasn’t exactly anything special – just a radio controlled car I was given for a present when I was about 9. It has 2 DC motors – 1 for forwards/backwards drive of the rear wheels, and 1 for controlling the steering direction of the front wheels. The steering motor never makes full rotations, and just simply pushes a pinion which drives a rack connected to the steering mechanism. In short: it’s quick and simple, and that makes it very simple for us.~~ This is no longer the case. Instead, I’m using a Dagu robot chassis, as pictured above.

So, the aim was to modify this car in such a way as to allow it to be controlled remotely over the internet. Actually, that isn’t quite true. I wanted it to be controlled by any number of people on the internet (one at a time, of course – responsible driving). I set out a rough specification, and got to work.

I set out a list of requirements for the end result:

- The car should be able to be controlled over the internet
- Quite key

- The car should be totally wireless
- WiFi is the obvious choice for this

- The car should be controlled from a webpage which allows a queue of users
- If this is to work for any number of users and not just myself, then this is required

- The car should have a mounted camera to allow the user to see where they’re driving
- Obvious, but difficult to actually get right, as we’ll see…

The above can all be achieved using a single Raspberry Pi running from batteries with a WiFi dongle, a camera module, and some external motor control circuitry. Here’s a pretty diagram:

Now to make it happen…

So, the users need to see where they’re going. What I learned from the last time I tried this experiment was that this is quite hard to get right. With this in mind, I got to work on this aspect first before moving on with the rest of the project.

Last time I approached this problem, I went down the easy route: I mounted my phone on a table and used the Ustream app to broadcast a live feed of the room. That was a disaster. Not only was it apparently deceptively difficult to control the car from a fixed location, but the Ustream service added in about 10 seconds of delay. That meant my poor users were driving practically blind, and could only see that they were running into my cat and driving up my chimney 10 seconds after the fact (though I still have a sneaking suspicion that some of that was on purpose). It was easy to get up and running, but Ustream really isn’t designed for this type of streaming.

So, this time I tried something new. I was already planning on using a Raspberry Pi to facilitate easy WiFi connectivity, so I opted for a Raspberry Pi camera module. I was extremely impressed by the camera module – it was incredibly easy to set up and get going. Now I just needed to get a live stream of the camera over the internet. ‘Just’.

Firstly, I tried VLC. VLC is great, and its streaming capabilities were a piece of cake to use. Unfortunately, I couldn’t find anything to play the stream types it provided in a browser (though it worked well from the VLC server to the VLC client!). So that was out of the window, since I wanted to be able to embed the stream directly in the webpage. Next I tried FFmpeg. Another brilliant piece of software, seemingly ruined by Libav/avserver, a fork of FFmpeg. Maybe that’s an unfair/sour judgement, but Libav didn’t work on my Pi (it would crash shortly after starting the stream), and the server/relay component, avserver, would simply crash as soon as it started up. So, after much faffing, cross compiling and configuration, I got FFmpeg set up and working instead. Except my Pi couldn’t handle encoding things like FLV streams (due to a lack of processing power), and FFmpeg’s MJPEG streaming is seemingly broken too. Even when FFmpeg worked, it introduced an unacceptable delay on the stream, which, as I mentioned previously, is only good if you’re okay with terrified cats and sooty cars.

Disgruntled, frustrated and possibly slightly sleep deprived, I couldn’t think of another way to continue with VLC or FFmpeg despite many more hours of debugging, so I decided to give up on that approach. I’m 99% certain there’s some way to force the FFmpeg server to do the transcoding and for the FFmpeg client on the Pi to just send raw data, but I was tired of endless bugs and strange errors. I just wanted a simple image stream from my Pi, which I could then relay to N clients. I didn’t really care about the fancy stuff FFmpeg and the likes could do, because this project doesn’t really need it. Fortunately, I stumbled across a fork of mjpg-streamer, a nifty bit of software designed to create an MJPEG stream from the Raspberry Pi camera. It all built easily and worked very reliably with minimal setup and minimal resource usage. I can’t recommend it highly enough for these types of projects, so go and give this project a star: https://github.com/jacksonliam/mjpg-streamer.

I decided that, despite its inefficiency, MJPEG was the best approach, since it required very little CPU processing and has a simple format, which became very helpful when I set to work on the next problem: relaying the stream to N clients. One of the original goals of this project was to create a sense of interaction with other users, so I thought it would be fun if not only the driver saw the camera stream, but all the users on the web page (that way, everyone can judge your driving). However, I didn’t want to burden the Pi (or my home upload speed) with streaming to too many clients, so I decided to re-broadcast a single stream from the Pi to N clients. I found an old bit of code online which claimed to relay MJPEG streams, but, as was the trend with this project, it didn’t seem to work either. At this point I was prepared to make my own solution, so that’s what I did. I read up on the format of MJPEG (which is very simple!), and set to work on a Python script which simply connects to the MJPEG stream provided by mjpg-streamer and relays the raw data to connected clients. It’s mostly straightforward: you just have to respect the JPEG boundaries of the individual frames in the stream and ensure the clients get the right headers. After that, the streaming is just a case of relaying whatever you get from the MJPEG stream source. The code is here if you’re interested: https://github.com/OliverF/mjpeg-relay.
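As an illustration of what “respecting JPEG boundaries” means, here’s a simplified sketch (not the actual mjpeg-relay code) that splits an incoming byte buffer into complete JPEG frames using the start-of-image and end-of-image markers. Real JPEG data escapes stray 0xFF bytes inside the compressed payload, so scanning for the markers like this works in practice:

```python
JPEG_SOI = b"\xff\xd8"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker

def split_frames(buffer):
    """Extract complete JPEG frames from a byte buffer.

    Returns (frames, remainder): every complete SOI..EOI frame found,
    plus whatever trailing bytes belong to a not-yet-complete frame.
    """
    frames = []
    while True:
        start = buffer.find(JPEG_SOI)
        if start == -1:
            return frames, b""
        end = buffer.find(JPEG_EOI, start + 2)
        if end == -1:
            return frames, buffer[start:]  # incomplete frame, keep it
        frames.append(buffer[start:end + 2])
        buffer = buffer[end + 2:]
```

Each complete frame can then be written to every connected client, preceded by the multipart boundary and `Content-Type: image/jpeg` headers the MJPEG-over-HTTP format expects.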

So to summarize:

- mjpg-streamer acquires the feed from the Raspberry Pi camera module, and makes it available as an MJPEG stream
- mjpeg-relay reads the MJPEG stream and relays it to N connected clients

What’s really nice about the above is that because there is no transcoding, there is very little delay. Even relaying the stream to my VPS in the Netherlands and back again introduces a largely unnoticeable delay. As a bonus, most good browsers support MJPEG embedding directly into a webpage, so no Java applets, JavaScript code, or other workarounds are required.

The camera was set up and working happily; now I had to make the Pi control the motors. The general approach was to use an H-bridge motor controller IC to control the two DC motors within the car. First, a quick power budget:

- The Pi model B draws about 500mA.
- The camera module draws about 250mA.
- The WiFi dongle draws another 250mA.

This totals up to roughly 1A, and that’s not even considering the motors. I designed the power supply for the Pi/motor controllers to be separate from the motor power supply. You can use a single power supply, but watch out for back-EMF.

So that I didn’t burn through the world’s supply of AA batteries in a single day, I opted for rechargeable NiMH AA batteries: 6 for the Pi/ICs, 4 for the motors (since the car was originally designed to power the motors with 4 batteries, and this way I only needed one voltage regulator for the Pi/ICs).

One thing I noticed while designing this stage was that there seems to be no formal way of representing a stripboard layout. I ended up with a rather cheesy looking yet understandable layout, shown below. The layout was made using DIYLC, a prototype layout tool.

- The block on the right is the 26 pin GPIO header, which is used to connect to the Pi (header, cable).
- Q1 is a 1.5A, 5V fixed voltage self-contained switching regulator module (datasheet).
- This regulator allows us to use a battery pack of more than four 1.2V batteries (remember, rechargeable NiMH batteries are nominally 1.2V), while still providing the 5V required by the Pi and motor controller ICs.
- Note: 4 perfect 1.2V batteries would provide 4.8V, close to the 5V required by the Pi, but **never connect batteries to the Pi directly in that manner – always use a regulator as shown in the layout above. Even NiMH batteries which state 1.2V on the side will produce ~1.4V each when fully charged, so 4×1.4V = 5.6V may at worst damage/kill your Pi, and at best will simply become unstable as the batteries begin to drain.**
- As previously mentioned, the system draws ~1A. To play on the safe side, I used a 1.5A regulator.

- IC1/IC2 are H-Bridge motor controllers (datasheet).
- This allows a motor to be driven in either direction, depending on the logic levels of 3 inputs. The details are specified in the datasheet.
- There is only 1 enable pin on each IC, so you can’t run the drive motors without also powering the steering, and vice versa.
- You could choose a different motor controller perhaps with 2 separate enable pins – I just happened to have these.
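The mapping from a high-level action to pin states is essentially a truth table, which can be sketched as follows. The levels below are illustrative of a common L293-style driver, not necessarily the ICs used here – always take the actual values from your controller’s datasheet:

```python
# Hypothetical pin-level truth table for one channel of an H-bridge driver
# (one enable pin plus two direction inputs). Illustration only -- check
# your own IC's datasheet for the real levels.
def motor_pins(action):
    """Map a high-level action to (enable, input_a, input_b) logic levels."""
    table = {
        "forward": (1, 1, 0),  # current flows one way through the motor
        "reverse": (1, 0, 1),  # current flows the other way
        "brake":   (1, 1, 1),  # both inputs equal: motor terminals shorted
        "coast":   (0, 0, 0),  # driver disabled: motor spins freely
    }
    return table[action]
```

On the Pi, each tuple element would be written to the corresponding GPIO pin wired to the controller.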

- My design includes 2 separate power supplies, one for the motors and one for the Pi and motor controllers. The supply for the ICs and Pi is regulated, while the motor supply is connected directly.
- You can just use a single power source, but watch out for back-EMF.

I soldered it all up, and connected the car’s motors to the stripboard. To my surprise, it all worked first time!

I initially opted for a linear regulator for Q1, but I discovered that it was getting too hot during use. It was dropping about 4V at 1A, dissipating 4W, which according to its datasheet should have pushed the temperature of the regulator up to 200°C. Oops. I switched it out for a direct “drop in replacement” pre-built switching regulator module (linked above), which has a much greater efficiency, reducing the heat of the regulator while conserving battery life.

The hardware and visuals were ready, now I just had to control the motors from the Pi. Using the datasheet for the H-bridge motor controllers, it was just a simple case of setting the right outputs depending on the direction/action required. The code can be found here: https://gist.github.com/OliverF/f0a75ed4dd38c029b779.

The front end is made entirely in JavaScript. It connects using WebSockets to a node.js server, which then relays commands via TCP to the Pi. The node.js server handles the user queue to ensure only one user can control the car at a time. The code can be found here: https://gist.github.com/OliverF/ddc88eae83675ae3aac4.
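The actual queue lives in the node.js server, but the idea is simple enough to sketch in a few lines (Python here, purely for illustration – the class and method names are my own):

```python
from collections import deque

class DriverQueue:
    """First-come, first-served driving: only the user at the head
    of the queue may send commands to the car."""

    def __init__(self):
        self.users = deque()

    def join(self, user):
        if user not in self.users:
            self.users.append(user)

    def leave(self, user):
        if user in self.users:
            self.users.remove(user)

    def may_drive(self, user):
        # Commands from anyone else are silently dropped.
        return bool(self.users) and self.users[0] == user
```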


This is a project I worked on around June 2014. It’s written in JavaScript, and uses the IvanK graphics library. The nice artwork isn’t my own! I used the Space Shooter Redux sprites, available here.

Mainly just a project I used to practice JavaScript – specifically OO JavaScript using the base.js library. Coming from nice OO languages like C#, I felt a bit hindered by the prototype-based inheritance JavaScript offers. base.js sets out to help you there – well worth looking into!

The **singleplayer** code is available on GitHub: https://github.com/OliverF/astrovoids

The **singleplayer** version is available here: http://projects.bitnode.co.uk/Astrovoids

(Use WASD to move, space to shoot)

The multiplayer version is where it gets interesting. Unfortunately, I’m not happy with the performance of the multiplayer version right now, so I’ll post an update about that later.

But, where’s the fun in that? Fast-forward a year to 2014, and I finally convinced myself to buy a Raspberry Pi to mess around with. I already had quite a bit of experience with Linux, so I wasn’t planning to use it to learn Linux. In fact, I didn’t know what I’d use it for until the night before it arrived, when it hit me: I could use it to make a much better door lock! So when it arrived, along with a servo I happened to order with it, I got to work. Here’s the end result:

At the start of the video, you see the Raspberry Pi on the left, the servo/circuit housing in the middle, and the lock itself on the right. In the second half, you can see the keypad connected via a ribbon cable to the circuit housing.

Features:

- Keypad code locking/unlocking
- Web interface to control and view status from any internet-connected device
- Easy to update (thanks to the Raspberry Pi itself being a Linux device)
- Mains powered with a single 5v power source
- GPIO ribbon cable/socket for easy removal of the Raspberry Pi

Read on to find out how it works.

**The RPi (Raspberry Pi)** controls everything. It receives power via the GPIO header, meaning the servo and the RPi can share a single power source. It’s also connected to my home WiFi network via a WiFi dongle. The RPi runs a Python script which listens for UDP data from a specific server IP, and interfaces with the keypad to listen for physical input. If the key combination is correct or the UDP data is correct, then the door lock can be controlled.
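For illustration, here's a minimal sketch (in Python, since that's what the Pi runs) of the kind of UDP listener the script uses. The server address, port, and command format below are hypothetical – the real script's protocol isn't shown in this post:

```python
import socket

TRUSTED_SERVER_IP = "203.0.113.10"  # hypothetical server address
UNLOCK_COMMAND = b"unlock"          # hypothetical command format

def handle_packet(data, addr, unlock):
    """Act on a command only if it came from the trusted server IP."""
    if addr[0] != TRUSTED_SERVER_IP:
        return False  # ignore packets from anywhere else
    if data.strip() == UNLOCK_COMMAND:
        unlock()  # drive the servo to the unlocked position
        return True
    return False

def listen(port=5005):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, addr = sock.recvfrom(1024)
        handle_packet(data, addr, unlock=lambda: print("unlocking"))
```

The same `unlock` callback can be shared with the keypad path, so both input methods drive one piece of servo code.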

**The keypad** is a simple £3 matrix keypad. Hook up the columns to RPi outputs, and rows to inputs. Set a column to high, and then look for input on all of the four rows. If a given row is high, you know the column and row of the key pressed and can infer the number pressed.
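The scanning logic can be sketched like this. The `set_column`/`read_row` callbacks are stand-ins for the RPi.GPIO output/input calls, so the row/column inference can be shown (and tested) without hardware – the 4x3 keymap is an assumption:

```python
# Hypothetical 4-row, 3-column matrix keypad layout.
KEYMAP = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["*", "0", "#"],
]

def scan_keypad(set_column, read_row, columns=3, rows=4):
    """Drive one column high at a time, read every row, and return the
    pressed key (or None). set_column(col, high) and read_row(row) wrap
    the actual GPIO output/input calls."""
    for col in range(columns):
        for c in range(columns):
            set_column(c, c == col)  # only this column is driven high
        for row in range(rows):
            if read_row(row):        # high row + known column => key
                return KEYMAP[row][col]
    return None
```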

**The lock** is controlled with some string and a non-continuous-rotation servo (which simply means the PWM duty cycle fed to the servo corresponds to a target angle).
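For a typical hobby servo (an assumption – check your servo's datasheet for the real pulse range), a 1ms pulse in a 20ms frame means one end of travel and 2ms means the other, so mapping an angle to a PWM duty cycle looks something like:

```python
def angle_to_duty(angle_deg, period_ms=20.0, min_pulse_ms=1.0, max_pulse_ms=2.0):
    """Map a target angle (0-180 degrees) to a PWM duty cycle in percent,
    assuming a typical hobby servo: 1ms pulse = 0 degrees, 2ms = 180."""
    pulse = min_pulse_ms + (angle_deg / 180.0) * (max_pulse_ms - min_pulse_ms)
    return 100.0 * pulse / period_ms
```

With RPi.GPIO you would start PWM at 50Hz on the servo pin and pass this percentage to `ChangeDutyCycle`.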

**The servo/circuit housing** connects the RPi to the servo, keypad, and power. You can see the GPIO header is connected via a ribbon cable to the circuit housing, where it’s plugged into a socket mounted onto some stripboard inside there. The circuit housing is connected to a 5.0v USB mains power adapter (phone charger!), which then feeds the servo and the RPi (via the GPIO) with power, making one neat mains power connection.

Try it out here: http://projects.bitnode.co.uk/KNN/

If you’re unfamiliar with it, K-NN (k-nearest neighbours) is a machine learning algorithm. Specifically, it can be used for classification (i.e. does this belong to class x or class y?). The points you see on the map are training data. They essentially define the boundaries for classification, so that if we were to bring an unclassified point into the data set, we could decide which class it belongs to based upon the training data. The map shows the classification of each individual pixel in the feature space.

The algorithm for K-NN is really simple. You just look at the K training points nearest to the particular input point (in this case, the location of a pixel) and average out the class. For example, if K=3, at pixel (10,10) we find the distance from the pixel to every point in the training set, take the nearest 3 points, and average out their class. In this case, class is a colour (red or blue), so we can just sum up the RGB components of each point individually and divide them by 3, giving us our averaged colour and therefore the class we assign to the pixel in question.
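That description translates almost line-for-line into code. Here's a minimal Python sketch of the idea (the training points and colours below are made up):

```python
import math

def classify(pixel, training, k=3):
    """K-NN by colour averaging: training is a list of
    ((x, y), (r, g, b)) tuples; return the average colour of the
    k training points nearest to the pixel."""
    nearest = sorted(training, key=lambda p: math.dist(pixel, p[0]))[:k]
    r = sum(c[0] for _, c in nearest) / k
    g = sum(c[1] for _, c in nearest) / k
    b = sum(c[2] for _, c in nearest) / k
    return (r, g, b)
```

Running this for every pixel in the feature space produces exactly the kind of map shown in the demo: pixels deep inside a cluster come out pure red or blue, while pixels near a class boundary get a blended colour.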

Here are some interesting outputs:

Also an interesting bug I encountered while making this:

Try it for yourself here: http://projects.bitnode.co.uk/KNN/

Well, useful if you like smashing cars in BeamNG Drive: a soft-body car physics simulator which is much more fun than it sounds! Here’s the kind of stuff you can do after editing your vehicles:

If that’s sold you already, click here to go to the download.

First, you might need to tell it where your BeamNG Drive vehicle directory is located (just go into Options->Settings), and then it will scan the chosen directory for vehicle config files. The config files are stored as .jbeam, which is some variant of JSON that I had lots of fun regex-ing to convert to valid JSON. After that, you get a screen with a huge tree on the left, and an editor on the right. You can edit values to your liking:
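To give a flavour of the regex fun (this is one illustrative fixup, not the editor's actual rule set): .jbeam files often omit the commas between array entries, which a substitution like this can patch up in simple cases:

```python
import re

def patch_missing_commas(text):
    """Insert a comma wherever one value ends and the next begins,
    e.g. '["a" "b"]' -> '["a", "b"]'. Deliberately naive: it would
    also fire inside strings containing spaced digits/quotes, so a
    real converter needs more care than this sketch."""
    return re.sub(r'([\]\}"0-9])\s+(?=["\[\{0-9-])', r'\1, ', text)
```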

It saves automatically as you type, so you can hop back into BeamNG Drive, hit CTRL+R, and the config will be loaded!

- Automated scanning for .jbeam files
- Full .jbeam parsing, allowing for total flexibility of input
- Smart lookahead (so common config arrays will be presented in a multi-column format for easy editing)
- Type detection for boolean values (so that editing is a checkbox) and decimal/integers (so that saving them does not save them as a string, and maintains the original type)
- Tab index to allow you to quickly move between edit boxes by pressing tab
- Auto update checker (can be disabled)
- Component creation by duplicating existing components
- Node deletion
- Custom node adding directly from the editor

- (done) Type detection (so numbers will be a number box, booleans will be a check box, etc.)
- (done) Custom node adding (So you can create vehicles from within the editor)
- (done) Tab index (So you can tab your way through values to edit them easily)

**v0.5**

- Added node adding
- Recoded a major portion of the code, improving editing speeds

Older releases:

**v0.4:**

- Fixed bug where fullsize.jbeam would be incorrectly parsed
- Added “Create component from…” function, so that parts can be duplicated and modified without editing the original
- Patch v0.4.1: Made the “add new component” function create JSON-friendly validated names
- Patch v0.4.2: Added automatic update checking (can be disabled) and “Delete node” option (right click on a node to open the menu)

DriveEditor (179.0 KiB)

(August 19th, 21:00)

**v0.3:**

- Improved (but not perfected) jbeam parsing. At least now all default files should be able to be parsed
- Added tab indexes to text boxes, so you can easily move between edit boxes by pressing tab
- Fixed bug where text would be truncated on long labels (oops)
- Added type detection, so values will be loaded, presented, and saved as their original types (e.g. booleans will be checkboxes, integers will be saved without quotes, etc.)
- Allowed for window resizing

DriveEditor (176.0 KiB)

**v0.2:**

- Added loading screen, made it easier to browse arrays (useful for BeamNG’s use of arrays)

DriveEditor (176.3 KiB)

**v0.1** – Initial release

BitBak is an automated offsite encrypted Dropbox backup which utilizes TrueCrypt volumes to keep your data secured while storing it on the cloud. The goal is to protect your work from prying eyes and data loss at the same time by automatically encrypting then backing up your files to existing free cloud services. Sold already? Go to download!

If you’re a student like me, you don’t have a great deal of money to throw around. You probably keep your small or individual projects in a local repo, a cheap remote repo, or no repo at all. With a local repo (or no repo), all your files are at the mercy of your (or your local server’s) hard drive. A free or cheap remote repo may one day disappear off the face of the earth, or you may suspect that being that cheap is only possible by doing something questionable with people’s projects (in which case you should opt for a local repo if you’re the only one working on it). So essentially the risks amount to hard drive damage or theft, or remote service termination (along with all your files, if you’re unlucky).

Very simply, it mounts a TrueCrypt volume, looks through all the folders you specify to back up, zips them up, copies them to the mounted volume, and unmounts again. As part of this process, excess old backups are removed as per your specification. You can specify the interval at which to run, or you can silently run the program from the Windows Task Scheduler (just supply the argument “runsilent”) to allow total flexibility over when it runs.
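The "excess old backups are removed" step boils down to keeping the newest N archives. A rough sketch, assuming archives are plain .zip files sitting in one folder (BitBak's real naming and layout may differ):

```python
import os

def prune_backups(folder, keep):
    """Delete all but the newest `keep` .zip archives in `folder`,
    judged by file modification time."""
    archives = sorted(
        (f for f in os.listdir(folder) if f.endswith(".zip")),
        key=lambda f: os.path.getmtime(os.path.join(folder, f)),
        reverse=True,  # newest first
    )
    for old in archives[keep:]:
        os.remove(os.path.join(folder, old))
```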

At this point, I should make it clear why I use Dropbox for this. Dropbox only syncs changed blocks, rather than the entire volume, so adding small files to your volume won’t cost you a few hours of uploading. I should also point out that a dynamically sized volume will prevent a huge upload when you first create it. Of course, both of these choices are entirely up to you.

- Automated interval-timed or manual backups to an encrypted TrueCrypt volume on Dropbox
- Multi-directory backups
- Encrypted key storage for TrueCrypt volume key (so that the volume can be securely mounted and unmounted automatically)
- Old backup archives automatically deleted as per your settings
- Can be run silently via Windows Task Scheduler or batch script by adding argument “runsilent”

- Create a new TrueCrypt volume. Follow the steps in TrueCrypt to create an encrypted file container (‘standard TrueCrypt volume’) inside your Dropbox folder (or anywhere, but Dropbox works best as syncing doesn’t require the entire volume to be uploaded)

- Open Options->Settings to let BitBak know where your TrueCrypt volume resides, what its password is, to add directories to the backup process, and to configure a few preferences.

- Run a backup from the main screen. That’s all! You should be able to see all the backups in your TrueCrypt volume when you next mount it through TrueCrypt.

TrueCrypt and Dropbox are each covered by their own license agreements, to which you must agree before installing and using those services. You must also agree to the license provided in ‘BitBak license.txt’ before using the program.

Versions:

v0.1.1

- Fixed “runsilent” bug (thanks John N. for pointing this out)
- Fixed bug where backups would ignore specified path within archive and appear in the root backup directory

Source code:

BitBakSrc (5.2 MiB)

Older versions:

v0.1 – Initial release

BitBak (5.2 MiB)

Source code:

BitBakSrc (29.5 KiB)

If you get a malware warning, that’s because I’ve requested administrator privileges. If you don’t trust me, read through the source code – it’s only a few hundred lines!

This builds on my previous system, which you can see in my previous post. If you want to learn how to make simple 2D lighting, read that first. If you want to know how to make it 10x+ faster, read this!

So, this post deals with the issues faced with the inefficient method described originally – that is, checking every single ray against every single 2D surface. Imagine doing this manually: draw some rectangles on a page, and for each degree from 0–360, draw a ray extending from the center, limiting the length of the ray to the first thing it meets. Now, assuming you’re a smart individual, you would only consider the rectangles on your piece of paper that are roughly in the direction of the ray you’re drawing. That makes sense when you think about it: why would you even bother to look at rectangles that are in totally the wrong direction from the ray? Well, that’s exactly what our previous algorithm was doing! Fortunately, there’s a solution. It’s called ‘spatial hashing’. It’s great! Here’s a video of it in action:

We’re now getting 60+ FPS with 100 rectangles and multiple lights, which is a vast improvement. There are some extra optimizations we can make in addition to spatial hashing, but we’ll stick with this for now for the sake of simplicity.

Just for a rough idea of just how good this can be:

The old algorithm of checking EVERY surface against EVERY ray regardless of direction took about 117ms per light for 400 surfaces.

With this spatial hashing algorithm, that time is reduced to ~4ms

A huge improvement, and we still have more improvements we can make! Read more to find out how, and to see the source code.

**[WARNING: This method is terribly inefficient. Please read some of the more recent articles for better methods. Feel free to read this method to understand the basics!]**

As explained in the introduction, we essentially want the computer to be smarter about how it checks for intersects between rays and surfaces. Put simply, we don’t want it checking all the surfaces, only the ones the ray might hit. So how do we do that? Spatial hashing! Spatial hashing is where we divide the screen into smaller segments (known as ‘buckets’) and assign each rectangle to the bucket(s) it lies within. Now we do our ray cast as normal, only this time we first work out which bucket our ray is in. Our ray can only ever intersect with the rectangles in that bucket, so here’s the key: we only check for intersects with the rectangles in that particular bucket! That has a tremendous advantage: before, we were looping over 400 rectangle boundaries (100 rectangles) for each ray, whereas now we only loop over the rectangles in the ray’s bucket, which may only be a few!

Now since we’re ray casting, it makes sense to split the screen radially, like this:

Traditional 2D spatial hashing divides the screen into boxes, but this way is simple. You can try dividing the screen into boxes instead, and working out what boxes the light has influence in, and only looping over the rectangles in those boxes, but I’ll leave that up to you.

So, on to the “how”. First, we’ll need a new class. We’ll start with a constructor and a few basic members:

```
class RadialSpatialHasher
{
    Point origin; //origin of the buckets (should match position of light)
    int n; //how many buckets we want
    double stopang; //leave as 2*PI for now, we want a full circle
    double bucketsize; //we can work this out in our constructor (stopang/n)
    List<object>[] buckets; //one list of objects per bucket
    public RadialSpatialHasher(float x, float y, int n, double stopang)
    {
        origin = new Point(x, y);
        this.n = n;
        this.stopang = stopang;
        bucketsize = stopang / n; //angular width of each bucket
        buckets = new List<object>[n];
    }
```

Now we need a way to work out which bucket a particular angle falls into

```
public int GetBucketID(double ang)
{
    //we need to implement some sort of "fall through" so that if the angle is -ve, we
    //don't return a negative index because they don't exist! So find the equivalent positive one
    //if the angle is -ve by subtracting the absolute of the angle from 2*PI
    if (ang >= 0)
        return (int)Math.Floor(ang / bucketsize);
    else
        return (int)Math.Floor(((Math.PI * 2) - Math.Abs(ang)) / bucketsize);
}
```

The reason we must check for a negative angle is that arrays have no negative indices, and there are no negative buckets! So if the angle is negative, we simply work out which bucket it falls into by finding the equivalent positive angle (2*PI – Abs(angle)).

Next, we need to have some way of working out which buckets a rectangle is in. Imagine a rectangle so large that it spans across 2 or more buckets, so we need to work out which buckets it falls into.

```
public List<int> GetBucketsInRange(double minang, double maxang)
{
    int startbucket = GetBucketID(minang);
    int endbucket = GetBucketID(maxang);
    List<int> buckets = new List<int>();
    for (int i = startbucket; i <= endbucket; i++)
    {
        buckets.Add(i);
    }
    return buckets;
}
```

This method just returns a list of bucket IDs starting at one angle and ending at another (larger) angle. Should be easy enough!

Now the slightly harder part: we need a way of inserting the rectangles into all the buckets they fall into. On first thought this sounds easy, but it's a little tricky if the rectangle overlaps the 0-360 boundary. We first need to find the maximum and minimum angles of our rectangle – that is, the vertex with the largest angle from horizontal and the vertex with the smallest. That part is fiddly, so I'll explain it later (see the appendix). For now, let's handle adding the rectangle to the buckets, assuming we already know the minimum/maximum angle of the rectangle:

```
public void AddToBuckets(object newObject, double minang, double maxang)
{
    List<int> bucketIDs;
    if (minang < 0)
    {
        bucketIDs = new List<int>();
        //if the object has overlapped the boundary: take the absolute value of the minimum angle,
        //work out how many segments this corresponds to, and fill buckets from
        //(n - 1 - (number of segments we just worked out)) to (n - 1), AND buckets from 0 to the max angle
        double tempang = Math.Abs(minang);
        int bucketCount = GetBucketID(tempang);
        for (int i = ((n - 1) - bucketCount); i <= (n - 1); i++)
            bucketIDs.Add(i);
        //now we handle everything from 0 to the max angle
        int posBuckets = GetBucketID(maxang);
        for (int i = 0; i <= posBuckets; i++)
            bucketIDs.Add(i);
    }
    else
    {
        bucketIDs = GetBucketsInRange(minang, maxang);
    }
    foreach (int bucketID in bucketIDs)
    {
        if (buckets[bucketID] == null)
            buckets[bucketID] = new List<object>();
        buckets[bucketID].Add(newObject);
    }
}
```

As you see above, if the minimum angle of the rectangle is positive, it's easy (just use the GetBucketsInRange method we made earlier). However, if the minimum angle is less than 0, i.e. this rectangle overlaps the 0-360 boundary, we need to do some thinking. First of all, treat the minimum angle as if it were positive, i.e. take the absolute value. Now work out which bucket that would fall into, assuming the angle is positive. Then we subtract that many buckets from the maximum bucket (n-1) and fill in the buckets between that and (n-1). Example: for -0.5 radians, imagine 0.5 radians falls in bucket 1. So we put this rectangle in buckets (n-1) - 1 to (n-1) (i.e. the last two buckets), and then we fill in from bucket 0 to the max angle, as usual. A little confusing, but read it enough times and it should make sense.
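To sanity-check the wrap-around case, here's the same bucket arithmetic mirrored in Python (just for a quick check – it isn't part of the C# project). With n = 8 (bucket size pi/4), a rectangle spanning -0.5 to 0.5 radians lands in the last bucket (7) and the first bucket (0):

```python
import math

def bucket_id(ang, n):
    """Same rule as GetBucketID: negative angles map to their
    equivalent positive angle (2*pi - |ang|) before dividing."""
    size = 2 * math.pi / n
    if ang >= 0:
        return int(ang // size)
    return int((2 * math.pi - abs(ang)) // size)

def buckets_for(minang, maxang, n):
    """Same rule as AddToBuckets: a negative minimum angle means the
    shape crosses the 0/2*pi boundary, so fill top-end buckets plus
    buckets from 0 up to the maximum angle."""
    if minang < 0:
        wrapped = bucket_id(abs(minang), n)
        ids = list(range(n - 1 - wrapped, n))            # top-end buckets
        ids += list(range(0, bucket_id(maxang, n) + 1))  # 0 up to maxang
        return ids
    return list(range(bucket_id(minang, n), bucket_id(maxang, n) + 1))
```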

Finally, we want a function to get all the rectangles in a given bucket.

```
public List<object> GetBucket(int bucketID)
{
    //safety checks to ensure this bucket ID is sensible
    if (bucketID < 0 || bucketID >= n || buckets[bucketID] == null)
        return new List<object>();
    return buckets[bucketID];
}
```

We just do some safety checks to ensure this bucket ID is sensible, and return the bucket with this ID.

Now we just modify the code from the Light class in my previous blog post:

```
public static void RenderLights(List<Light> lights, List<RenRectangle> rectangles, int n)
{
    foreach (Light light in lights)
    {
        //make a hash at the position of this light, with a max angle of 2*PI
        RadialSpatialHasher hash = new RadialSpatialHasher(light.pos.X,
            light.pos.Y, n, Math.PI * 2);
        foreach (RenRectangle rectangle in rectangles)
        {
            //add this rectangle to the hash, by working out the min/max angle of this rectangle from the light source
            hash.AddToBuckets(rectangle, rectangle.MinAngle(light.pos.X,
                light.pos.Y), rectangle.MaxAngle(light.pos.X,
                light.pos.Y));
        }
        GL.Begin(BeginMode.TriangleFan);
        GL.Color4(light.color.R, light.color.G, light.color.B, light.originalpha);
        GL.Vertex2(light.pos.X, light.pos.Y); //central point
        int startang = 0;
        int maxang = 360;
        //count in degrees, do for all 360 degrees
        for (int i = startang; i <= maxang; i = i + light.anglestep)
        {
            if ((i == startang + 90) || (i == startang + 270))
                continue;
            float angle = (float)Math.PI * i / 180; //convert to radians
            float dx = (float)Math.Cos(angle); //unit vector in x direction
            float dy = (float)Math.Sin(angle); //unit vector in y direction
            float t = light.size; //scalar distance of ray
            //Here's the key part: we only loop over the rectangles in this bucket!
            //We get the bucket this ray is in by doing hash.GetBucketID(angle) where angle
            //is the angle of this particular ray
            foreach (RenRectangle rectangle in hash.GetBucket(hash.GetBucketID(angle)))
            {
                Line[] bounds = rectangle.GetBounds();
                foreach (Line bound in bounds)
                {
                    Intercept intercept = light.GetIntercept(bound, i, t);
                    if (intercept.Hit)
                        t = intercept.Distance;
                }
            }
            float alphascale = light.originalpha * (t / light.size);
            GL.Color4(light.color.R, light.color.G, light.color.B,
                light.originalpha - alphascale);
            GL.Vertex2(light.pos.X + dx * t, light.pos.Y + dy * t);
        }
        GL.End();
    }
}
```

The comments explain the modified parts: if there's something you don't understand, take a look at my previous blog post. Hopefully you'll now have a basic understanding of how to make a radial spatial hash. You can experiment with the n parameter to make fewer or more buckets. I found that hundreds of buckets works "bucket loads" better, but you can experiment for yourself by using the Stopwatch class to find out how long the function takes to run. Good luck!

Appendix:

One final bit of code to help you: finding the min/max angle of all points in a rectangle. You'll need this when adding the rectangle to the hash. You can add this to your Rectangle class from the previous tutorial.

```
public double MaxAngle(float originX, float originY)
{
    double maxang = 0;
    bool overlap = false;
    if (pos.Y >= originY && pos.Y - height < originY)
        overlap = true;
    foreach (Line bound in GetBounds())
    {
        foreach (Point endpoint in bound.GetEndpoints())
        {
            double newang = Vector2.AngleFromZero(originX, originY,
                endpoint.X, endpoint.Y, overlap);
            if (newang > maxang)
                maxang = newang;
        }
    }
    return maxang;
}
public double MinAngle(float originX, float originY)
{
    double minang = (float)(2 * Math.PI);
    bool overlap = false;
    if (pos.Y >= originY && pos.Y - height < originY)
        overlap = true;
    foreach (Line bound in GetBounds())
    {
        foreach (Point endpoint in bound.GetEndpoints())
        {
            double newang = Vector2.AngleFromZero(originX, originY,
                endpoint.X, endpoint.Y, overlap);
            if (newang < minang)
                minang = newang;
        }
    }
    return minang;
}
```

The following code can be added to your Vector class. It is used to work out the angle from one point to another, and also takes into account whether the shape overlaps the 0-360 boundary. A little bit of trigonometry will go far, here.

```
public static double AngleFromZero(float originX, float originY, float endX, float endY, bool overlap)
{
    float dx = endX - originX;
    float dy = endY - originY;
    if (dx > 0)
    {
        //we're in the right-hand plane
        if (dy > 0)
        {
            //we're in the top-right-hand plane
            return Math.Atan(Math.Abs(dy / dx));
        }
        else
        {
            //we're in the bottom-right-hand plane
            if (!overlap)
                return ((2 * Math.PI) - Math.Atan(Math.Abs(dy / dx)));
            else
                return (-Math.Atan(Math.Abs(dy / dx)));
        }
    }
    else
    {
        if (dy > 0)
        {
            //we're in the top-left-hand plane
            return (Math.PI - Math.Atan(Math.Abs(dy / dx)));
        }
        else
        {
            //we're in the bottom-left-hand plane
            return (Math.PI + Math.Atan(Math.Abs(dy / dx)));
        }
    }
}
```

I’ve decided to tackle the challenge of 2D shadows and lighting first in my to-be 2D game. It’s something I’ve wanted to try for a long time, and never really had the guts to try. It seems like a daunting task, but after a few days of work, I’ve produced a basic lighting system which can be extended to any polygon. It doesn’t use the best algorithm in the world, however it’s a start, and it’s something I can build on and improve later. Performance-wise, it’s not terrible. It can handle 400 2D lines (100 rectangles) at 20FPS with one light source. Here’s a video of it in action:

The above video demonstrates a slightly more advanced multi-light system. Click “read more” to read the details of how it works, how to make it, and the source code.

**[WARNING: This method is terribly inefficient. Please read some of the more recent articles for better methods. Feel free to read this to understand the basics, though.]**

So, diving in to the technical details:

The general approach here is to define a point from which rays (of light) will extend, up to a defined maximum length. We extend a ray at every angle (0-360) from this point of light. We do a check with every ‘line’ of every polygon in the area and see if this ray intersects this line. If it does intersect this line, limit the length of the ray to the value of the distance from the point of light to the point of intersection.

We do this as shown in the picture, only for all the angles in between my drawn ‘rays’ too (the full 360 degrees).

Now, obviously if we tell OpenGL to draw only at these rays we’re going to end up with something that looks like a sea urchin rather than a source of light. So, we solve this by using the GL_TRIANGLE_FAN mode. Assuming the first vertex you specify is the location of the light source, you then specify all of the intersection points we figured out above as the following vertices. That way, we fill in the space between the rays with our light, while excluding the area behind the objects!

The red line represents the shape that OpenGL will draw as we specify all the vertices. Obviously with more rays, we’ll get a better result that doesn’t overlap the objects we’re trying to draw shadows around, but you get the idea from the picture (hopefully).

So now we know what to do, how do we do it? I’ll post the source code as we go along. First, we need to define some basic classes which will help us keep track of everything. It is reasonable to make a Rectangle class, because we’re using rectangles frequently. We’ll give it a position, a width and a height, for now. We’ll also want a way to draw the rectangles easily, so we’ll add a public function for that too.

```
class RenRectangle
{
    Point pos;
    public float width;
    public float height;
    Color4 color = Color4.Blue;
    public float Width
    {
        get
        {
            return width;
        }
    }
    public float Height
    {
        get
        {
            return height;
        }
    }
    public Point Pos
    {
        get
        {
            return pos;
        }
    }
    public RenRectangle(Point pos, float width, float height)
    {
        this.pos = pos;
        this.width = width;
        this.height = height;
    }
    public void Render()
    {
        //set OpenGL to draw a quad, specified by its four corner points
        GL.Begin(BeginMode.Quads);
        GL.Color4(color.R, color.G, color.B, 1f);
        //specify the points of the rectangle
        GL.Vertex2(pos.X, pos.Y);
        GL.Vertex2(pos.X + width, pos.Y);
        GL.Vertex2(pos.X + width, pos.Y - height);
        GL.Vertex2(pos.X, pos.Y - height);
        GL.End();
    }
    public Line[] GetBounds()
    {
        Line[] bounds = new Line[4];
        bounds[0] = new Line(pos.X, pos.Y, pos.X + width, pos.Y); //top
        bounds[1] = new Line(pos.X, pos.Y, pos.X, pos.Y - height); //left
        bounds[2] = new Line(pos.X + width, pos.Y, pos.X + width, pos.Y - height); //right
        bounds[3] = new Line(pos.X + width, pos.Y - height, pos.X, pos.Y - height); //bottom
        return bounds;
    }
}
```

The RenRectangle class (odd name so as not to conflict with Windows Drawing Rectangle class) has a position, specified with a Point object. This is a basic class, again, which simply holds an X and a Y value.

```
class Point
{
    public float X, Y;
    public Point()
    {
        X = Y = 0;
    }
    public Point(float x, float y)
    {
        this.X = x;
        this.Y = y;
    }
}
```

One final thing you may have noticed in the RenRectangle class: the GetBounds function. Rectangles are made up of 4 lines, by definition, so we have a function which generates and returns a list of those lines. You’ll see where these lines come in later, but let’s introduce the Line class (again, very basic):

```
class Line
{
    public float X, Y, endX, endY;
    public Line(float x, float y, float endx, float endy)
    {
        this.X = x;
        this.Y = y;
        this.endX = endx;
        this.endY = endy;
    }
}
```

Great, so we are almost ready to get going with the fun stuff. I will introduce the final fundamental class at this point: the Vector2 class. This is a little more involved, but not difficult to understand. Anyone who has studied vectors before should get the idea.

```
class Vector2
{
    float x, y;
    public float X
    {
        get
        {
            return x;
        }
    }
    public float Y
    {
        get
        {
            return y;
        }
    }
    public Vector2(float x, float y)
    {
        this.x = x;
        this.y = y;
    }
    public float Modulus()
    {
        return (float)Math.Sqrt(Math.Pow(x, 2) + Math.Pow(y, 2));
    }
    public Vector2 Unit()
    {
        float mod = this.Modulus();
        return new Vector2(x / mod, y / mod);
    }
    public static Vector2 operator +(Vector2 A, Vector2 B)
    {
        return new Vector2(A.X + B.X, A.Y + B.Y);
    }
    public static Vector2 operator -(Vector2 A, Vector2 B)
    {
        return new Vector2(A.X - B.X, A.Y - B.Y);
    }
    public static Vector2 operator /(Vector2 A, float div)
    {
        return new Vector2(A.X / div, A.Y / div);
    }
    public static Vector2 operator *(Vector2 A, float mult)
    {
        return new Vector2(A.X * mult, A.Y * mult);
    }
}
```

The Vector2 class has some operator overrides, which simply let us do things like multiplication of a float by a vector, for example. Very handy!

If you haven’t studied vectors, you should look into those before continuing. At this point we’ll lose you if you don’t understand the basics of vectors!

Right, the fun stuff. We need some vector maths.

As we said above, we need to be able to find the intersection point of 2 lines. A vector line can be defined like this:

R = B + t·D

B is the starting point of the vector, D is the unit vector in the direction of the vector, and t is the scalar multiplication factor (remember: t is how far in the direction of the unit vector we go).

So, we want to find out where the vector of the ray meets the vector of the line of a rectangle, and we want to do that for ALL lines of ALL rectangles for ALL rays of light (0-360).

So, let’s define one vector for the ray of light:

Rray = A + tR·E

A is the starting position (the origin of our ray of light) and E is our unit direction vector for the ray.

Now let’s define our vector for the line of a side of a rectangle:

Rline = S + tL·D

S is the starting point of our line, and D is the unit direction vector of the line.

These two lines intersect when:

A + tR·E = S + tL·D

Which, splitting into i and j components, gives:

Ax + tR·Ex = Sx + tL·Dx
Ay + tR·Ey = Sy + tL·Dy

We want to know the value of tR, so let’s rearrange and solve for tR. After some manipulation:

tR = (Dy·(Sx – Ax) + Dx·(Ay – Sy)) / h

where

h = Ex·Dy – Ey·Dx

Still with me? Great! Now we just put all that we’ve said above in code.

First, we make a struct to pass information around this class in a flexible manner:

```
struct Intercept
{
    public float Distance;
    public bool Hit;
    public Intercept(float distance, bool hit)
    {
        this.Hit = hit;
        this.Distance = distance;
    }
}
```

It simply stores information about the distance and if the lines do indeed intersect. So, on to the fun part: the light class.

What we’ll do is make a class which has a number of properties relevant to the light source defined through a constructor:

```
class Light
{
    bool IsInRange(float a, float b, float testpoint)
    {
        if (a > b)
        {
            if (testpoint >= b && testpoint <= a)
            {
                return true;
            }
        }
        else
        {
            if (testpoint >= a && testpoint <= b)
            {
                return true;
            }
        }
        return false;
    }
    bool HitTestBound(Point min, Point max, Point point)
    {
        return IsInRange(min.X, max.X, point.X) && IsInRange(min.Y, max.Y, point.Y);
    }
    protected Point pos;
    protected Color4 color;
    protected float size;
    protected float originalpha = 0.2f;
    protected float direction = 0;
    protected float width = 361;
    protected int anglestep = 1;
    protected bool dynamicflag = false;
    public Light(Point pos, Color4 lightColor, float size)
    {
        this.pos = pos;
        this.color = lightColor;
        this.size = size;
    }
    public void SetPos(Point newPos)
    {
        pos = newPos;
    }
```

Here we simply define a constructor, some properties of the light, and a few useful functions we'll use later. Should be fairly straightforward.

Next, we need to define the Render function, which will render light given a list of RenRectangle objects. It will then iterate from startang to maxang (0 to 360 degrees, in this case) after specifying the first Vertex for our light at the position of the light

```
public void Render(List<RenRectangle> rectangles)
{
    //set to triangle fan, first point acts as the central point
    //from which the "fan" extends. Following points describe
    //the arc of the fan
    GL.Begin(BeginMode.TriangleFan);
    GL.Color4(color.R, color.G, color.B, originalpha);
    GL.Vertex2(pos.X, pos.Y); //central point
    int startang = (int)(direction - (width / 2));
    int maxang = (int)(direction + (width / 2));
    //count in degrees, do for all 360 degrees
    for (int i = startang; i <= maxang; i = i + anglestep)
```

Great, so what we do next will be done for each of the rays of light. First, it's a good idea to convert to radians so we can use the Math library for Cos and Sin. We'll use that to work out the direction vector of the ray of light: dx and dy are the x and y components of a unit vector pointing along the ray.

```
//convert from degrees to radians
float angle = (float)Math.PI * i / 180;
//x component of the ray's unit direction vector
float dx = (float)Math.Cos(angle);
//y component of the ray's unit direction vector
float dy = (float)Math.Sin(angle);
//scalar distance of the ray (starts at the light's max size)
float t = size;
```

At this point it is important to note the variable t. Recall from above that t is the distance (magnitude) along the ray from the origin of the light to the intersection of the ray with a 'boundary' line of a rectangle (see fig. 2 above); if the ray doesn't intersect anything, it is simply the max size of the light. So we start with t = size, but t may shrink if we later find that the ray intersects the boundary of a rectangle. So, how do we find that out? Well, we already did the maths above; we simply check all the boundaries (lines) of all our rectangles to see if they intersect with this particular ray!

```
foreach (RenRectangle rectangle in rectangles)
{
    Line[] bounds = rectangle.GetBounds();
    foreach (Line bound in bounds)
    {
        Intercept intercept = GetIntercept(bound, i, t);
        if (intercept.Hit)
            t = intercept.Distance;
    }
}
```

We'll define GetIntercept later, but for now just try to understand what is happening here. We're setting t to the smallest distance between the light's origin and the intercept point of ANY line of any rectangle (as we'll see, GetIntercept only reports a hit when the new distance is smaller than the current t, which is how the minimum is tracked). That means that if this ray of light intersects a boundary, we get the effect shown in figure 2 above: the actual drawn 'light' is 'blocked' at this distance, and does not get drawn over the rectangle or beyond it. That gives the illusion of shadows! Now we need to let OpenGL know we want to draw a vertex there, so we do the following to finish up:

```
    float alphascale = originalpha * (t / size);
    GL.Color4(color.R, color.G, color.B, originalpha - alphascale);
    GL.Vertex2(pos.X + dx * t, pos.Y + dy * t);
    } //ends the per-ray for loop
    GL.End();
}
```

The alphascale simply gives that 'fade out' effect, so that as the ray extends, the light looks more diffuse.
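Written out, the alpha used for the vertex at distance t is

```
originalpha - alphascale = originalpha - originalpha * (t / size)
                         = originalpha * (1 - t / size)
```

so it falls off linearly from originalpha at the light's centre to zero at its maximum size.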

Now we need to define that GetIntercept function I mentioned and used earlier; this is where that vector maths comes in:

```
protected Intercept GetIntercept(Line bound, float degang, float t)
{
    //convert from degrees to radians
    float angle = (float)Math.PI * degang / 180;
    //endpoints of the bound
    Vector2 S = new Vector2(bound.X, bound.Y); //start of bound
    Vector2 ES = new Vector2(bound.endX, bound.endY); //end of bound
    Vector2 SP = ES - S;
    //unit direction vector of the bound
    Vector2 D = SP / (float)Math.Sqrt(Math.Pow(ES.X - S.X, 2)
        + Math.Pow(ES.Y - S.Y, 2));
    //origin of the ray
    Vector2 A = new Vector2(pos.X, pos.Y);
    //x component of the ray's unit direction vector
    float Ex = (float)Math.Cos(angle);
    //y component of the ray's unit direction vector
    float Ey = (float)Math.Sin(angle);
    //solve for t along the bound line
    float tb = ((S.X * Ey) - (A.X * Ey) + (A.Y * Ex)
        - (S.Y * Ex)) / ((D.Y * Ex) - (D.X * Ey));
    //substitute back to find t along the ray
    float tr = ((S.X) + (tb * D.X) - (A.X)) / Ex;
    //substitute tb back in to find the point of intersection
    Vector2 intersect = S + (D * tb);
```
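For reference, those two expressions come straight from the maths section earlier: a point on the ray is A + tr * E, a point on the bound is S + tb * D, and setting the two equal componentwise and solving gives

```
A.X + tr * Ex = S.X + tb * D.X
A.Y + tr * Ey = S.Y + tb * D.Y

tb = (S.X*Ey - A.X*Ey + A.Y*Ex - S.Y*Ex) / (D.Y*Ex - D.X*Ey)
tr = (S.X + tb*D.X - A.X) / Ex
```

which matches the code term for term.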

After building the vectors we defined in the maths bit beforehand, we apply our formula, then compute the 'intersect' vector, which is finally the point of interception! Now we must ensure the point of intersection actually lies on the line segment (it may not, because the lines extend to infinity), so we simply use our HitTestBound function to check that the intercept lies within the (very thin) box that is the side of the rectangle. If it lies in this box, it lies on that side of the rectangle, and thus the ray intercepts it at this point.

```
    Point intersectpoint = new Point(intersect.X, intersect.Y);
    Point startBox = new Point(bound.X, bound.Y);
    Point endBox = new Point(bound.endX, bound.endY);
    //the hit only counts if the intersect lies on the bound segment,
    //is in front of the light (tr >= 0) and is closer than the current t
    if (HitTestBound(startBox, endBox, intersectpoint) && tr <= t && tr >= 0.0f)
        return new Intercept(tr, true);
    else
        return new Intercept(tr, false);
}
```

We return an Intercept structure (for future flexibility) with the new value of tr, which is what we set out to find.
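The Intercept type itself isn't shown in this post; a minimal version matching how it's used above might look like this (an assumption on my part, not necessarily the exact definition):

```
//sketch of the Intercept type, inferred from its usage above
struct Intercept
{
    public float Distance; //the t value along the ray at the intersection
    public bool Hit;       //true if the intersection counts as a hit

    public Intercept(float distance, bool hit)
    {
        Distance = distance;
        Hit = hit;
    }
}
```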

So now we have that, it's time to put it to good use.

In our main program, we can generate some random rectangles and a light:

```
int n = 100;
rectangles = new List<RenRectangle>();
light = new Light(new Point(50, 50), Color4.White, 200f);
Random rand = new Random();
for (int i = 0; i < n; i++)
{
    float newx = (float)rand.NextDouble() * rand.Next(0, ClientRectangle.Width);
    float newy = (float)rand.NextDouble() * rand.Next(0, ClientRectangle.Height);
    float newwidth = (float)rand.NextDouble() * rand.Next(50, 100);
    float newheight = (float)rand.NextDouble() * rand.Next(50, 100);
    rectangles.Add(new RenRectangle(new Point(newx, newy), newwidth, newheight));
}
```

Now we can add the following to our OnRenderFrame function override:

```
foreach (RenRectangle rectangle in rectangles)
{
    rectangle.Render();
}
light.Render(rectangles);
```

And there you have it! You should be able to piece the rest of it together if you have a bit of experience with OpenGL and OpenTK. It simply involves taking care of viewports and setting up the screen, which is out of the scope of this tutorial and can be found with a quick Google search. You can extend the functionality by keeping a list of lights and rendering each light in a foreach loop, as we did with the rectangles.
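That multi-light extension is just a couple of loops. A sketch, reusing the types from this tutorial (Color4.Orange is one of OpenTK's built-in named colours; positions and sizes here are arbitrary):

```
//keep a list of lights instead of a single one
List<Light> lights = new List<Light>();
lights.Add(new Light(new Point(50, 50), Color4.White, 200f));
lights.Add(new Light(new Point(300, 200), Color4.Orange, 150f));

//in OnRenderFrame: draw the rectangles, then every light
foreach (RenRectangle rectangle in rectangles)
{
    rectangle.Render();
}
foreach (Light light in lights)
{
    light.Render(rectangles);
}
```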

This system is rather inefficient, but it gives a starting point for ray casting and 2D shadows. In the following tutorials I will explain the dynamic lighting you see in the video, and after that, how to optimise the system further (once I've gained enough understanding myself!)