Changeset 49 for liacs/nc/laser-pulse-shaping
- Timestamp: Dec 19, 2009, 5:44:54 PM
- Location: liacs/nc/laser-pulse-shaping
- Files: 3 edited
Legend: lines prefixed with '-' were removed in this changeset, lines prefixed with '+' were added, unprefixed lines are unchanged context, and '…' marks elided hunks.
liacs/nc/laser-pulse-shaping/latex.mk (r48 → r49)

  %.m.tex: %.m
-     highlight --include-style --linenumbers --no-doc \
-         --latex --input $< --output $@
+     highlight --linenumbers --no-doc --latex --style-outfile highlight.sty \
+         --style vim --wrap --line-length 80 --line-number-length 3 \
+         --input $< --output $@
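The practical effect of this change: style definitions are no longer embedded in every generated listing (--include-style) but written once to highlight.sty (--style-outfile), and listings are now wrapped at 80 columns. This is why report.tex below gains an \input{highlight.sty}. A minimal sketch of how the generated files are consumed on the LaTeX side, assuming the rule above has already produced highlight.sty and pso.m.tex — the comments are ours, not part of the changeset:

    \documentclass{article}
    \usepackage{color}     % the macros highlight emits rely on the color package
    \input{highlight.sty}  % style definitions written by --style-outfile
    \begin{document}
    \include{pso.m}        % \include appends .tex, so this loads generated pso.m.tex
    \end{document}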
liacs/nc/laser-pulse-shaping/pso.m (r47 → r49)

  % Dimention settings
- parameters = 80;
+ parameters = 10;

  % If global optimum does not change this many steps bail out
…
  % Flock properties
- local_swarm_size = 10;
- local_swarms = 50;
+ local_swarm_size = 50;
+ local_swarms = 10;

  %% Particle properties
…
  end

+ % Dispay hack
+ g_fitness
  g_best
…
  grid on;
  legend(sprintf('Parameters %i',parameters));
- print(sprintf('pso-fitness-%f.eps', max(fitness_history)),'-depsc2');
+ print(sprintf('pso-fitness-%.10f.eps', max(fitness_history)),'-depsc2');
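Two notes on this hunk. The %f → %.10f change matters because the fitness values reported below are of order 10^-6, which plain %f (six decimals) can barely distinguish in the generated file names. And since the update rule itself lies outside the hunk: the report states pso.m follows the Wikipedia reference implementation, so a single particle step presumably looks like the Octave sketch below. Every identifier and weight here is illustrative, not what is committed in pso.m:

    % Illustrative PSO step for one particle; the names and weights are
    % assumptions, not the code committed in r49.
    parameters = 10;               % dimension, as set in this changeset
    x = rand(parameters, 1);       % current position
    v = zeros(parameters, 1);      % current velocity
    p_best = x;                    % this particle's best position so far
    h_best = rand(parameters, 1);  % best position in its local swarm
    g_best = rand(parameters, 1);  % best position over all local swarms

    w = 0.9;                                      % inertia
    c_local = 0.5; c_hood = 1.5; c_global = 2.0;  % 'social': hood/global dominate

    v = w * v ...
        + c_local  * rand(parameters, 1) .* (p_best - x) ...
        + c_hood   * rand(parameters, 1) .* (h_best - x) ...
        + c_global * rand(parameters, 1) .* (g_best - x);
    x = x + v;                     % move the particle

Weighting the neighbourhood and global terms above the local one gives the 'social' behaviour the report describes; the bail-out counter mentioned at the top of pso.m then simply tracks how many iterations g_best has gone unchanged.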
liacs/nc/laser-pulse-shaping/report.tex (r48 → r49)

  \usepackage{color}
  \usepackage{subfig}
+ \usepackage{marvosym}
  \floatstyle{ruled}
  \newfloat{result}{thp}{lop}
  \floatname{result}{Result}

+ \input{highlight.sty}

  \title{Laser Pulse Shaping problem\\
…
  \section{Introduction}
  The report is focused on the so-called \emph{laser pulse shaping problem}.
- Todays lasers are also used within the range of atoms or mulecule research.
+ Today's lasers are also used within the range of atoms or molecule research.
  Using small pulses it is able to align and alter the movement of the atoms.
…
  laser pulse the way it can move the atoms.

+ To turn and tweak all the 'knobs' at the same time there will be \emph{Particle
+ Swarm Optimizer (PSO)} used to explore the search space. The \emph{PSO} is
+ basically a whole bunch of individual agents which all try to find an optimum
+ into the search space. During this search they get input about other potential
+ bests from the whole swarm (broadcast style) and the neighborhood
+ (observation) and using this values they determine their new location.

  \section{Problem description}
+ A laser pulse going through a crystal produces light at the octave of its
+ frequency spectrum. The total energy of the radiated light is proportional to
+ the integrated squared intensity of the primary pulse. Explicitly, the
+ time-dependent profile of the laser field in our simulations is given by:
+ \begin{equation}
+ \label{eq:simulation}
+ E(t) = \int_{-\infty}^\infty A(\omega)exp(i\phi(\omega))exp(i{\omega}t) d\omega,
+ \end{equation}
+
+ where $A(\omega)$ is a Gaussian window function describing the contribution of
+ different frequencies to the pulse and $\phi(\omega)$, the phase function,
+ equips these frequencies with different complex phases.

  To determine the best solution a fitness function is needed, which could be
  found in the shape of equation~\ref{eq:fitness}
  \begin{equation}
- \label{eg:fitness}
- SHG = \int_0^T E^4(t)dt \rightlongarrow maximization
+ \label{eq:fitness}
+ SHG = \int_0^T E^4(t)dt \longrightarrow maximization
  \end{equation}

- Due to the huge 'exploding' posibilities it is not possible to walk trough the
- whole list of posibities, so we need alternative approches to tackle this
- problem. First we will try the \emph{Monte Carlo Search algoritm [MCS]} and next try
- \emph{Simulated Annealing [SA]}.
-
- \emph{MCS} is all about random sequence generation and trying to find a good
- solution, do a small ajustment on the solution and compare the new solution
- again.
-
- \emph{SA} takes a more and less the same aproch, but it also accept small
- 'losses' from time-to-time. But as time passes it get less likely to accept
- 'bad solutions'.
  Note that $0 < SHG < 1$
  \section{Statistics}
- Of course many people have run numerous computer hours on finding the best
- possible fitness as shown in table~\ref{tab:best}. The algoritms used to find
- those numbers are not found.
-
- \begin{table}
- \caption{Best known values of low-autocorrelation problem}
- \begin{center}
- \begin{tabular}{| l | c | r | }
- \hline
- $n$ & Best known $f$ \\
- \hline \hline
- 20 & 7.6923 \\ \hline
- 50 & 8.1699 \\ \hline
- 100 & 8.6505 \\ \hline
- 200 & 7.4738 \\ \hline
- 222 & 7.0426 \\ \hline
- \end{tabular}
- \end{center}
- \label{tab:best}
- \end{table}

  \section{Approach}
- The \emph{MCS} is implemented streight-forward from the wikipedia
- page\footnote{http://en.wikipedia.org/wiki/Monte\_Carlo\_method}. The mutation
- choosing is flipping one bit in the array.
-
- For the \emph{SA} implemented also comes streight from 'the book'
- \footnote{http://en.wikipedia.org/wiki/Simulated\_annealing}, the process
- choosing for the cooling-down sequence is taken from the dutch wikipedia page
- 'Simulated annealing', which is nice indicator when to choose something when
- the solution is worse (logically, beter solutions will always be accepted).
- \footnote{http://nl.wikipedia.org/wiki/Simulated\_annealing}:
- \begin{math}
- p = e^{\frac{f(i)-f(j)}{c}}
- \end{math}
+ The Wikipedia page 'Particle swarm
+ optimization'~\footnote{http://en.wikipedia.org/wiki/Particle\_swarm\_optimization}
+ contained a reference implementation used to implement the algorithm. The nice
+ part about the algorithm is its flexibility in tuning. As within the \emph{PSO}
+ there are many 'knobs' which could be tweaked as well, like likeliness of
+ heading for the global optimum, neighborhood optimum and local optimum.

  \section{Implementation}
…
  \unixcmd{octave -q \%\%PROGRAM\%\%}

+ The flock is represented into a 3d block. A slice of that block contains a
+ local swarm, a column in slice is a individual particle.

  \section{Results}
- All experiments are run 5 times the best solution is choosen and will be
- resented at table~\ref{tab:result}. Iteration size is set to 1000. For $n=20$
- the best fitness history is shown at figure~\ref{fig:fitness}.
+ The program is run against a parameter set of 80. The algorithm is allowed to run
+ for 10 minutes with a maximum of 1000 iterations. If there are no improvements
+ after 5 iterations then it will bail out as well.

- \begin{table}
- \caption{Best known values of low-autocorrelation problem}
- \begin{center}
- \begin{tabular}{| r | r | r | r | }
- \hline
- $n$ & \multicolumn{3}{|c|}{Best fitness} \\
- & known & MCS & SA \\
- \hline \hline
- 20 & 7.6923 & 4.3478 & 4.7619 \\ \hline
- 50 & 8.1699 & 2.4752 & 2.6882 \\ \hline
- 100 & 8.6505 & 1.8342 & 1.7470 \\ \hline
- 200 & 7.4738 & 1.8678 & 1.5733 \\ \hline
- 222 & 7.0426 & 1.5657 & 1.4493 \\ \hline
- \end{tabular}
- \label{tab:result}
- \end{center}
- \end{table}
+ The algorithm is kind of 'social' e.g. it will favor the neighborhood and the
+ global optimum more than it's own local optimum. Also its active, meaning it
+ changes direction fast the moment the optimum is found somewhere else.
+
+ Size of the local swarms is 50, each with 10 agents.
+
+ After running 5 times, the best fitness found was $0.0000045481$. Improvement is
+ almost only shown in the first 100 iterations see figure~\ref{fig:pso-fitness},
+ afterwards it quickly stalls.
+ Trying with a smaller number more and less show the same result as seen in
+ figure~\ref{fig:pso-small}

  \begin{figure}[htp]
  \begin{center}
- \subfloat[SA]{\label{fig:edge-a}\includegraphics[scale=0.4]{sa-fitness.eps}}
- \subfloat[MCS]{\label{fig:edge-b}\includegraphics[scale=0.4]{mcs-fitness.eps}}
+ \subfloat[80]{\label{fig:pso-fitness}\includegraphics[scale=0.4]{pso-fitness.eps}}
+ \subfloat[10]{\label{fig:pso-small}\includegraphics[scale=0.4]{pso-fitness-10.eps}}
  \end{center}
  \caption{Fitness throughout the iterations}
…
  \end{figure}

+ Changing various flags like walkspeed $wander$ or changing the socialness of
+ the agents does not prove to change much in the algoritm result.

  \section{Conclusions}
- Looking at the graphs the fitness was still increasing so a larger iteration
- size would make the fitness a better result. Secondly the \emph{SA} is
- preforming much worse on large number than the \emph{MCS} one. It seems to the
- temperature function is not working as expected.
+ Giving the lack of external results of other algorithms in large scale setups
+ (80 parameters) its hard to say whether this is a good or worse preforming
+ algorithm.

- Both algoritms are preforming much worse then the best found solutions.
+ For furher research within the algorithm there are many knobs to tweak as well.
+ One could think of implementing a algorithm around this setup as well.

  \section{Appendix 1}
  \label{app:code}
- \include{autocorrelation.m}
- \include{initseq.m}
- \include{mcs-call.m}
- \include{mcs.m}
- \include{mutation.m}
- \include{randint.m}
- \include{sa-call.m}
- \include{sa.m}
+ \include{pso.m}
+ \include{SHGa.m}
+ \include{SHG.m}
  \end{document}
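The problem description added to report.tex is enough to write down a naive fitness evaluation: sample the phase function phi(omega) (the vector the PSO optimises), build E(t) by quadrature, and integrate the fourth power over [0, T]. A rough Octave sketch under assumed grids and window width — this is not the committed SHG.m/SHGa.m code, it uses |E|^4 because the quadrature yields complex values, and it makes no attempt at the normalisation that keeps 0 < SHG < 1:

    % Naive SHG fitness from the report's equations; the grids, the window
    % width and the missing normalisation are assumptions, not SHG.m itself.
    parameters = 10;
    omega = linspace(-2, 2, parameters);   % frequency grid
    t     = linspace(0, 1, 256);           % time grid, T = 1
    A     = exp(-omega .^ 2);              % Gaussian window A(omega)
    phi   = 2 * pi * rand(1, parameters);  % phase function: the PSO's knobs

    % E(t) = int A(w) exp(i phi(w)) exp(i w t) dw, trapezoidal rule
    E = zeros(size(t));
    for k = 1:length(t)
      E(k) = trapz(omega, A .* exp(1i * phi) .* exp(1i * omega * t(k)));
    end

    SHG = trapz(t, abs(E) .^ 4)            % fitness, to be maximised

Each particle position in the swarm is one such phi vector, so this evaluation is the only problem-specific piece the PSO needs.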