%
% $Id: report.tex 571 2008-04-20 17:31:04Z rick $
%

\documentclass[12pt,a4paper]{article}

\usepackage{listings}

\frenchspacing
\usepackage[english,dutch]{babel}
\selectlanguage{english}
\usepackage{graphicx}
\usepackage{url}
\usepackage{multicol}
\usepackage{fancybox}
\usepackage{amssymb,amsmath}
\usepackage{float}
\usepackage{color}
\usepackage{subfig}
\usepackage{marvosym}
\floatstyle{ruled}
\newfloat{result}{thp}{lop}
\floatname{result}{Result}

\input{highlight.sty}

\title{The low-autocorrelation problem\\
\large{Practical Assignments Natural Computing, 2009}}
\author{Rick van der Zwet\\
  \texttt{<hvdzwet@liacs.nl>}}
\date{\today}


\begin{document}
\newcommand{\wekacmd}[1]{\begin{quote}\small{\texttt{#1}}\end{quote}}
\newcommand{\unixcmd}[1]{\begin{quote}\small{\texttt{#1}}\end{quote}}


\maketitle

\section{Introduction}
This report focuses on the so-called \emph{low-autocorrelation problem of
binary sequences}, which is the subject of ongoing research and of great
interest for industrial applications, e.g. communications and electrical
engineering. Its description is as follows.

\textbf{Feasible Solutions:} binary sequences $\vec{y} \in \{-1,+1\}^n$

\textbf{Objective Function:}
\begin{equation}
f(\vec{y}) = \frac{n^2}{2 \cdot E(\vec{y})} \longrightarrow \textrm{maximization}
\end{equation}

\begin{equation}
E(\vec{y}) = \displaystyle\sum_{k=1}^{n-1} \left( \sum_{i=1}^{n-k} y_i \cdot y_{i+k} \right)^2
\end{equation}
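
As a minimal illustration, these two quantities can be computed in
\emph{Octave} as follows (a sketch only; the function names \texttt{energy}
and \texttt{fitness} are chosen here for illustration and need not match the
code in Appendix~\ref{app:code}):
\begin{lstlisting}
% Energy E(y): sum over all shifts k of the squared autocorrelation.
function E = energy(y)
  n = length(y);
  E = 0;
  for k = 1:n-1
    C = sum(y(1:n-k) .* y(1+k:n));  % autocorrelation at shift k
    E = E + C^2;
  end
end

% Fitness (merit factor) f(y) = n^2 / (2 * E(y)), to be maximized.
function f = fitness(y)
  f = length(y)^2 / (2 * energy(y));
end
\end{lstlisting}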

\section{Problem description}
Due to the 'exploding' number of possibilities it is not feasible to walk
through the whole list of candidate sequences, so we need alternative
approaches to tackle this problem. First we will try the \emph{Monte Carlo
Search algorithm [MCS]} and next \emph{Simulated Annealing [SA]}.

\emph{MCS} is all about random sequence generation: generate a random
sequence, make a small adjustment to the current solution and compare the new
solution against it again, keeping the better of the two.
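
A minimal \emph{Octave} sketch of such a loop (illustrative only; it relies on
the \texttt{fitness} and \texttt{mutation} helpers sketched in this report and
is not necessarily identical to the code in Appendix~\ref{app:code}):
\begin{lstlisting}
% Monte Carlo Search sketch: repeatedly mutate the best sequence found
% so far and keep the candidate only if it improves the fitness.
n = 20;
iterations = 1000;
best = 2 * (rand(1, n) > 0.5) - 1;  % random start sequence of -1/+1 values
best_f = fitness(best);
for i = 1:iterations
  candidate = mutation(best);       % small adjustment: flip one position
  f = fitness(candidate);
  if f > best_f                     % only improvements are accepted
    best = candidate;
    best_f = f;
  end
end
\end{lstlisting}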

\emph{SA} takes more or less the same approach, but it also accepts small
'losses' from time to time. As time passes, however, it becomes less likely to
accept 'bad' solutions.

\section{Statistics}
Of course many people have spent numerous hours of computer time on finding
the best possible fitness values, shown in table~\ref{tab:best}. The
algorithms used to find those values are not documented.

\begin{table}
  \caption{Best known values of the low-autocorrelation problem}
  \begin{center}
    \begin{tabular}{| r | r |}
      \hline
      $n$ & Best known $f$ \\
      \hline \hline
      20 & 7.6923 \\ \hline
      50 & 8.1699 \\ \hline
      100 & 8.6505 \\ \hline
      200 & 7.4738 \\ \hline
      222 & 7.0426 \\ \hline
    \end{tabular}
  \end{center}
  \label{tab:best}
\end{table}

\section{Approach}
The \emph{MCS} is implemented straightforwardly following the Wikipedia
page\footnote{http://en.wikipedia.org/wiki/Monte\_Carlo\_method}. The mutation
chosen is flipping one randomly selected bit in the array.
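
Such a one-bit flip could, for example, be written in \emph{Octave} as follows
(a sketch; the helper name \texttt{mutation} is illustrative and need not match
\texttt{mutation.m} in Appendix~\ref{app:code}):
\begin{lstlisting}
% Mutation: flip the sign of one randomly chosen position of the sequence.
function y = mutation(y)
  i = ceil(rand() * length(y));  % random index in 1..length(y)
  y(i) = -y(i);                  % -1 becomes +1 and vice versa
end
\end{lstlisting}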

The \emph{SA} implementation also comes straight from 'the
book'\footnote{http://en.wikipedia.org/wiki/Simulated\_annealing}. The
acceptance rule used during the cooling-down sequence is taken from the Dutch
Wikipedia page on simulated
annealing\footnote{http://nl.wikipedia.org/wiki/Simulated\_annealing}; it
indicates when to accept a candidate that is worse (logically, better
solutions will always be accepted):
\begin{math}
p = e^{\frac{f(i)-f(j)}{c}}
\end{math}
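
Here $i$ denotes the current solution, $j$ the candidate solution and $c$ the
current temperature (as in the Wikipedia formulation). In code the acceptance
step could, as a sketch, look as follows; since fitness is maximized here, the
difference is taken so that a worse candidate yields a negative exponent, and
the cooling factor $0.95$ is only an example value, not necessarily the one
used in Appendix~\ref{app:code}:
\begin{lstlisting}
% One Simulated Annealing acceptance step (maximizing the fitness f).
f_cur = fitness(current);
f_new = fitness(candidate);
if f_new >= f_cur
  current = candidate;                     % always accept improvements
elseif rand() < exp((f_new - f_cur) / c)   % f_new - f_cur < 0 here, so p < 1
  current = candidate;                     % sometimes accept a worse solution
end
c = 0.95 * c;                              % example geometric cooling schedule
\end{lstlisting}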

\section{Implementation}
The code is written in
\emph{Octave}\footnote{http://www.gnu.org/software/octave/}, which is the
open-source 'variant' of \emph{MATLAB}\copyright
\footnote{http://www.mathworks.com/products/matlab/}. There are minor
differences between them, but all code is made compatible so it runs on both
systems. The code can be found in Appendix~\ref{app:code}.

As work is done remotely, the following commands are used:
\unixcmd{matlab-bin -nojvm -nodesktop -nosplash -nodisplay < \%\%PROGRAM\%\%}
\unixcmd{octave -q \%\%PROGRAM\%\%}

\section{Results}
All experiments are run 5 times; the best solution is chosen and presented in
table~\ref{tab:result}. The number of iterations is set to 1000. For $n=20$
the best fitness history is shown in figure~\ref{fig:fitness}.

\begin{table}
  \caption{Best fitness values found by MCS and SA compared to the best known values}
  \begin{center}
    \begin{tabular}{| r | r | r | r |}
      \hline
      $n$ & \multicolumn{3}{|c|}{Best fitness} \\
          & known & MCS & SA \\
      \hline \hline
      20 & 7.6923 & 4.3478 & 4.7619 \\ \hline
      50 & 8.1699 & 2.4752 & 2.6882 \\ \hline
      100 & 8.6505 & 1.8342 & 1.7470 \\ \hline
      200 & 7.4738 & 1.8678 & 1.5733 \\ \hline
      222 & 7.0426 & 1.5657 & 1.4493 \\ \hline
    \end{tabular}
    \label{tab:result}
  \end{center}
\end{table}

\begin{figure}[htp]
  \begin{center}
    \subfloat[SA]{\label{fig:edge-a}\includegraphics[scale=0.4]{sa-fitness.eps}}
    \subfloat[MCS]{\label{fig:edge-b}\includegraphics[scale=0.4]{mcs-fitness.eps}}
  \end{center}
  \caption{Fitness throughout the iterations}
  \label{fig:fitness}
\end{figure}

\section{Conclusions}
Looking at the graphs, the fitness was still increasing, so a larger number of
iterations would yield better results. Secondly, \emph{SA} performs much worse
for large $n$ than \emph{MCS}; it seems the temperature function is not
working as expected.

Both algorithms perform much worse than the best known solutions.

\section{Appendix 1}
\label{app:code}
\include{autocorrelation.m}
\include{initseq.m}
\include{mcs-call.m}
\include{mcs.m}
\include{mutation.m}
\include{randint.m}
\include{sa-call.m}
\include{sa.m}
\end{document}