%
% $Id: report.tex 571 2008-04-20 17:31:04Z rick $
%

\documentclass[12pt,a4paper]{article}

\usepackage{listings}

\frenchspacing
\usepackage[dutch,english]{babel}
\selectlanguage{english}
\usepackage{graphicx}
\usepackage{url}
\usepackage{multicol}
\usepackage{fancybox}
\usepackage{amssymb,amsmath}
\usepackage{float}
\usepackage{color}
\usepackage{subfig}
\floatstyle{ruled}
\newfloat{result}{thp}{lop}
\floatname{result}{Result}

\input{highlight.sty}

\title{The low-autocorrelation problem\\
\large{Practical Assignments Natural Computing, 2009}}
\author{Rick van der Zwet\\
  \texttt{<hvdzwet@liacs.nl>}}
\date{\today}


\begin{document}
\newcommand{\wekacmd}[1]{\begin{quote}\small{\texttt{#1}}\end{quote}}
\newcommand{\unixcmd}[1]{\begin{quote}\small{\texttt{#1}}\end{quote}}


\maketitle

\section{Introduction}
This report focuses on the so-called \emph{low-autocorrelation problem of
binary sequences}, which is a subject of ongoing research and of great
interest for industrial applications, e.g. communications and electrical
engineering. Its description is as follows.

\textbf{Feasible Solutions:} Binary Sequences $\vec{y} \in \{-1,+1\}^n$

\textbf{Objective Function:}
\begin{equation}
f(\vec{y}) = \frac{n^2}{2 \cdot E(\vec{y})} \longrightarrow \text{maximization}
\end{equation}

\begin{equation}
E(\vec{y}) = \displaystyle\sum_{k=1}^{n-1} \left(\sum_{i=1}^{n-k} y_i \cdot y_{i+k}\right)^2
\end{equation}

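To make these definitions concrete, the following is a minimal \emph{Octave}
sketch of how $E(\vec{y})$ and $f(\vec{y})$ can be computed; the function
names are illustrative and need not match the actual code in
Appendix~\ref{app:code}.

\begin{lstlisting}
% Energy E(y): sum of squared aperiodic autocorrelations (second equation).
function E = energy(y)
  n = length(y);
  E = 0;
  for k = 1:n-1
    C = sum(y(1:n-k) .* y(1+k:n));   % autocorrelation at shift k
    E = E + C^2;
  end
end

% Fitness f(y) = n^2 / (2 * E(y)) (first equation), to be maximized.
function f = fitness(y)
  f = length(y)^2 / (2 * energy(y));
end
\end{lstlisting}
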
\section{Problem description}
Because the number of possible sequences grows exponentially with $n$, it is
not feasible to walk through the whole list of possibilities, so we need
alternative approaches to tackle this problem. First we will try the
\emph{Monte Carlo Search algorithm [MCS]} and next \emph{Simulated Annealing
[SA]}.

\emph{MCS} starts from a randomly generated sequence, makes a small adjustment
to the current solution and compares the new solution with the old one,
keeping the better of the two; a sketch is given below.

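The following is a minimal \emph{Octave} sketch of such an MCS loop; it
assumes the illustrative \texttt{fitness} helper from the introduction and a
\texttt{mutation} helper (sketched in the Approach section), and need not
match the actual code in Appendix~\ref{app:code}.

\begin{lstlisting}
% Minimal Monte Carlo Search: keep mutating, accept only improvements.
function [best, best_f] = mcs(n, iterations)
  best = sign(rand(1, n) - 0.5);    % random {-1,+1} start sequence
  best(best == 0) = 1;              % guard against exact zeros
  best_f = fitness(best);
  for i = 1:iterations
    candidate = mutation(best);     % small adjustment of the best-so-far
    candidate_f = fitness(candidate);
    if candidate_f > best_f         % keep only improvements
      best = candidate;
      best_f = candidate_f;
    end
  end
end
\end{lstlisting}
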
\emph{SA} takes more or less the same approach, but it also accepts small
'losses' from time to time. As time passes, however, it becomes less likely to
accept 'bad solutions'.

\section{Statistics}
Of course many people have spent numerous computer hours on finding the best
possible fitness, as shown in table~\ref{tab:best}. The algorithms used to
find those values are not documented.

\begin{table}
  \caption{Best known values of the low-autocorrelation problem}
  \begin{center}
    \begin{tabular}{| l | r |}
      \hline
      $n$ & Best known $f$ \\
      \hline \hline
      20  & 7.6923 \\ \hline
      50  & 8.1699 \\ \hline
      100 & 8.6505 \\ \hline
      200 & 7.4738 \\ \hline
      222 & 7.0426 \\ \hline
    \end{tabular}
  \end{center}
  \label{tab:best}
\end{table}

\section{Approach}
The \emph{MCS} is implemented straightforwardly from the Wikipedia
page\footnote{http://en.wikipedia.org/wiki/Monte\_Carlo\_method}. The chosen
mutation is flipping one bit in the array, as sketched below.

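A minimal \emph{Octave} sketch of such a bit-flip mutation (the function name
is illustrative and need not match the code in Appendix~\ref{app:code}):

\begin{lstlisting}
% Flip the sign of one randomly chosen position of a {-1,+1} sequence.
function y = mutation(y)
  i = ceil(rand() * length(y));   % random index in 1..n
  y(i) = -y(i);                   % flip that position
end
\end{lstlisting}
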
The \emph{SA} implementation also comes straight from 'the
book'\footnote{http://en.wikipedia.org/wiki/Simulated\_annealing}. The
acceptance rule used during the cooling-down sequence is taken from the Dutch
Wikipedia page on 'Simulated
annealing'\footnote{http://nl.wikipedia.org/wiki/Simulated\_annealing}; it
indicates when to accept a candidate even though it is worse (logically,
better solutions will always be accepted):
\begin{math}
p = e^{\frac{f(i)-f(j)}{c}}
\end{math}

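Below is a minimal \emph{Octave} sketch of one possible SA loop using such an
acceptance rule. Note that since $f$ is maximized here, the exponent in the
sketch is arranged so that a worse candidate yields $p < 1$; the helper names,
cooling schedule and this sign convention are illustrative and may differ from
the actual code in Appendix~\ref{app:code}.

\begin{lstlisting}
% Minimal simulated annealing with geometric cooling (illustrative).
function [best, best_f] = sa(n, iterations, c0, alpha)
  cur = sign(rand(1, n) - 0.5);  cur(cur == 0) = 1;
  cur_f = fitness(cur);
  best = cur;  best_f = cur_f;
  c = c0;                          % initial temperature, e.g. 1.0
  for i = 1:iterations
    cand = mutation(cur);
    cand_f = fitness(cand);
    % Better solutions are always accepted; worse ones with probability p.
    if cand_f > cur_f || rand() < exp((cand_f - cur_f) / c)
      cur = cand;  cur_f = cand_f;
    end
    if cur_f > best_f              % remember the best solution seen
      best = cur;  best_f = cur_f;
    end
    c = alpha * c;                 % cool down, e.g. alpha = 0.99
  end
end
\end{lstlisting}
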
\section{Implementation}
The code is written in
\emph{Octave}\footnote{http://www.gnu.org/software/octave/}, the open-source
'variant' of
\emph{MATLAB}\copyright\footnote{http://www.mathworks.com/products/matlab/}.
There are minor differences between the two, but all code is made compatible
so that it runs on both systems. The code can be found in
Appendix~\ref{app:code}.

As the work is done remotely, the following commands are used:
\unixcmd{matlab-bin -nojvm -nodesktop -nosplash -nodisplay < \%\%PROGRAM\%\%}
\unixcmd{octave -q \%\%PROGRAM\%\%}

\section{Results}
All experiments are run 5 times; the best solution is chosen and presented in
table~\ref{tab:result}. The number of iterations is set to 1000. For $n=20$
the best fitness history is shown in figure~\ref{fig:fitness}.

\begin{table}
  \caption{Best fitness found by MCS and SA compared to the best known values}
  \begin{center}
    \begin{tabular}{| r | r | r | r |}
      \hline
      $n$ & \multicolumn{3}{|c|}{Best fitness} \\
          & known  & MCS    & SA     \\
      \hline \hline
      20  & 7.6923 & 4.3478 & 4.7619 \\ \hline
      50  & 8.1699 & 2.4752 & 2.6882 \\ \hline
      100 & 8.6505 & 1.8342 & 1.7470 \\ \hline
      200 & 7.4738 & 1.8678 & 1.5733 \\ \hline
      222 & 7.0426 & 1.5657 & 1.4493 \\ \hline
    \end{tabular}
    \label{tab:result}
  \end{center}
\end{table}

\begin{figure}[htp]
  \begin{center}
    \subfloat[SA]{\label{fig:edge-a}\includegraphics[scale=0.4]{sa-fitness.eps}}
    \subfloat[MCS]{\label{fig:edge-b}\includegraphics[scale=0.4]{mcs-fitness.eps}}
  \end{center}
  \caption{Fitness throughout the iterations}
  \label{fig:fitness}
\end{figure}

\section{Conclusions}
Looking at the graphs, the fitness was still increasing, so a larger number of
iterations would give a better fitness. Secondly, \emph{SA} performs much
worse for large $n$ than \emph{MCS}; it seems the temperature function is not
working as expected.

Both algorithms perform much worse than the best known solutions.

\section{Appendix 1}
\label{app:code}
\include{autocorrelation.m}
\include{initseq.m}
\include{mcs-call.m}
\include{mcs.m}
\include{mutation.m}
\include{randint.m}
\include{sa-call.m}
\include{sa.m}
\end{document}