%
% $Id: report.tex 571 2008-04-20 17:31:04Z rick $
%

\documentclass[12pt,a4paper]{article}

\usepackage{listings}

\frenchspacing
\usepackage[dutch,english]{babel}
\selectlanguage{english}
\usepackage{graphicx}
\usepackage{url}
\usepackage{multicol}
\usepackage{fancybox}
\usepackage{amssymb,amsmath}
\usepackage{float}
\usepackage{color}
\usepackage{subfig}
\usepackage{marvosym}
\floatstyle{ruled}
\newfloat{result}{thp}{lop}
\floatname{result}{Result}

\input{highlight.sty}

\title{The low-autocorrelation problem\\
{\large Practical Assignments Natural Computing, 2009}}
\author{Rick van der Zwet\\
\texttt{<hvdzwet@liacs.nl>}}
\date{\today}


\begin{document}
\newcommand{\wekacmd}[1]{\begin{quote}\small{\texttt{#1}}\end{quote}}
\newcommand{\unixcmd}[1]{\begin{quote}\small{\texttt{#1}}\end{quote}}


\maketitle

\section{Introduction}
This report focuses on the so-called \emph{low-autocorrelation problem of
binary sequences}, which is the subject of ongoing research and of great
interest for industrial applications, e.g.\ communications and electrical
engineering. Its description is as follows.

\textbf{Feasible solutions:} binary sequences $\vec{y} \in \{-1,+1\}^n$

\textbf{Objective function:}
\begin{equation}
f(\vec{y}) = \frac{n^2}{2 \cdot E(\vec{y})} \longrightarrow \text{maximize}
\end{equation}

\begin{equation}
E(\vec{y}) = \sum_{k=1}^{n-1} \left( \sum_{i=1}^{n-k} y_i \cdot y_{i+k} \right)^2
\end{equation}
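These two formulas translate directly into code. The following is a minimal
Python sketch for illustration only; the report's own implementation is
written in Octave and listed in the appendix:

```python
import random

def energy(y):
    """E(y): sum over all lags k of the squared autocorrelation."""
    n = len(y)
    return sum(
        sum(y[i] * y[i + k] for i in range(n - k)) ** 2
        for k in range(1, n)
    )

def fitness(y):
    """Merit factor f(y) = n^2 / (2 * E(y)), to be maximized."""
    n = len(y)
    return n ** 2 / (2 * energy(y))

# Example: for y = (+1, +1, -1) the lag sums are 0 and -1,
# so E = 0^2 + (-1)^2 = 1 and f = 3^2 / 2 = 4.5.
y = [random.choice((-1, +1)) for _ in range(20)]
print(fitness(y))
```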

\section{Problem description}
Due to the exponential number ($2^n$) of possible sequences it is not
feasible to walk through the whole list of candidates, so we need alternative
approaches to tackle this problem. First we will try the \emph{Monte Carlo
Search algorithm (MCS)} and next \emph{Simulated Annealing (SA)}.

\emph{MCS} is all about random search: generate a random sequence as the
current solution, apply a small adjustment to it, and compare the new
solution against the current one, keeping whichever is better.
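A compact sketch of this loop, in Python for illustration (the report's
actual implementation is the Octave code in the appendix), using the
one-bit-flip mutation described in the Approach section:

```python
import random

def fitness(y):
    """Merit factor n^2 / (2 * E(y)) from the introduction."""
    n = len(y)
    e = sum(sum(y[i] * y[i + k] for i in range(n - k)) ** 2
            for k in range(1, n))
    return n ** 2 / (2 * e)

def mcs(n, iterations, seed=None):
    """Monte Carlo Search: keep a candidate only if it improves."""
    rng = random.Random(seed)
    best = [rng.choice((-1, +1)) for _ in range(n)]
    best_f = fitness(best)
    for _ in range(iterations):
        cand = list(best)
        cand[rng.randrange(n)] *= -1  # flip a single position
        cand_f = fitness(cand)
        if cand_f > best_f:
            best, best_f = cand, cand_f
    return best, best_f

seq, f = mcs(20, 1000, seed=1)
```

Since only improvements are accepted, the best fitness found is
non-decreasing in the number of iterations.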

\emph{SA} takes more or less the same approach, but it also accepts small
'losses' from time to time. As time passes, however, it becomes less likely
to accept 'bad solutions'.

\section{Statistics}
Many computer hours have of course been spent by others on finding the best
possible fitness values, shown in Table~\ref{tab:best}. The algorithms used
to find those values could not be traced.

\begin{table}
\caption{Best known values of the low-autocorrelation problem}
\label{tab:best}
\begin{center}
\begin{tabular}{| l | r |}
\hline
$n$ & Best known $f$ \\
\hline \hline
20 & 7.6923 \\ \hline
50 & 8.1699 \\ \hline
100 & 8.6505 \\ \hline
200 & 7.4738 \\ \hline
222 & 7.0426 \\ \hline
\end{tabular}
\end{center}
\end{table}

\section{Approach}
The \emph{MCS} implementation follows the Wikipedia
page\footnote{\url{http://en.wikipedia.org/wiki/Monte_Carlo_method}} in a
straightforward way. The mutation flips one randomly chosen bit in the
array.

The \emph{SA} implementation also comes straight from 'the
book'\footnote{\url{http://en.wikipedia.org/wiki/Simulated_annealing}}. The
acceptance rule for the cooling-down sequence is taken from the Dutch
Wikipedia page on simulated
annealing\footnote{\url{http://nl.wikipedia.org/wiki/Simulated_annealing}},
which gives the probability of accepting a solution that is worse
(logically, better solutions are always accepted):
\begin{math}
p = e^{\frac{f(i)-f(j)}{c}}
\end{math}
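This acceptance rule can be sketched as follows, again in Python for
illustration rather than the report's Octave code. Note that the quoted
formula is stated for a minimization problem; since $f$ is maximized here,
the sketch assumes the sign-flipped exponent $(f_{new}-f_{current})/c$:

```python
import math
import random

def sa_accept(f_current, f_candidate, c, rng=random):
    """Metropolis acceptance step for a maximization problem.

    Improvements are always accepted; a worse candidate is accepted
    with probability p = exp((f_candidate - f_current) / c), which
    shrinks as the temperature c cools down.
    """
    if f_candidate >= f_current:
        return True
    return rng.random() < math.exp((f_candidate - f_current) / c)
```

With a high temperature $c$ almost any candidate passes; as $c$ approaches
zero the rule degenerates into the pure improvement test of \emph{MCS}.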

\section{Implementation}
The code is written in
\emph{Octave}\footnote{\url{http://www.gnu.org/software/octave/}}, the
open-source 'variant' of
\emph{MATLAB}\textregistered\footnote{\url{http://www.mathworks.com/products/matlab/}}.
There are minor differences between the two, but all code is written to run
on both systems. The code is to be found in Appendix~\ref{app:code}.

As the work is done remotely, the following commands are used:
\unixcmd{matlab-bin -nojvm -nodesktop -nosplash -nodisplay < \%\%PROGRAM\%\%}
\unixcmd{octave -q \%\%PROGRAM\%\%}

\section{Results}
All experiments are run 5 times; the best solution is chosen and is
presented in Table~\ref{tab:result}. The iteration size is set to 1000. For
$n=20$ the best fitness history is shown in Figure~\ref{fig:fitness}.

\begin{table}
\caption{Best fitness values found, compared to the best known values}
\label{tab:result}
\begin{center}
\begin{tabular}{| r | r | r | r | }
\hline
$n$ & \multicolumn{3}{|c|}{Best fitness} \\
 & known & MCS & SA \\
\hline \hline
20 & 7.6923 & 4.3478 & 4.7619 \\ \hline
50 & 8.1699 & 2.4752 & 2.6882 \\ \hline
100 & 8.6505 & 1.8342 & 1.7470 \\ \hline
200 & 7.4738 & 1.8678 & 1.5733 \\ \hline
222 & 7.0426 & 1.5657 & 1.4493 \\ \hline
\end{tabular}
\end{center}
\end{table}

\begin{figure}[htp]
\begin{center}
\subfloat[SA]{\label{fig:edge-a}\includegraphics[scale=0.4]{sa-fitness.eps}}
\subfloat[MCS]{\label{fig:edge-b}\includegraphics[scale=0.4]{mcs-fitness.eps}}
\end{center}
\caption{Fitness throughout the iterations}
\label{fig:fitness}
\end{figure}

\section{Conclusions}
Looking at the graphs, the fitness was still increasing, so a larger
iteration size would yield a better result. Secondly, \emph{SA} is
performing much worse for large $n$ than \emph{MCS}; it seems the
temperature function is not working as expected.

Both algorithms perform much worse than the best known solutions.

\section{Appendix 1}
\label{app:code}
\include{autocorrelation.m}
\include{initseq.m}
\include{mcs-call.m}
\include{mcs.m}
\include{mutation.m}
\include{randint.m}
\include{sa-call.m}
\include{sa.m}
\end{document}