Goulib.optim module

various optimization algorithms: knapsack (bin packing), traveling salesman, hill climbing, simulated annealing, differential evolution

class Goulib.optim.ObjectiveFunction(objective_function)[source]

Bases: object

class to wrap an objective function and keep track of the best solution evaluated
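The wrapper's behaviour can be sketched as follows; this is a minimal hypothetical re-implementation assuming minimization, with illustrative attribute names (`best`, `best_solution`), not Goulib's actual code:

```python
class ObjectiveFunctionSketch:
    """wraps an objective function and remembers the best solution seen so far"""

    def __init__(self, objective_function):
        self.objective_function = objective_function
        self.best = None           # best (lowest) score seen so far
        self.best_solution = None  # solution that produced it

    def __call__(self, solution):
        score = self.objective_function(solution)
        if self.best is None or score < self.best:
            self.best = score
            self.best_solution = solution
        return score


f = ObjectiveFunctionSketch(lambda x: x * x)
for x in (3, -1, 2):
    f(x)
# f.best == 1, f.best_solution == -1
```

Calling the wrapper evaluates the function and updates the tracked best whenever a better score is seen, so an optimizer can use it as a drop-in objective.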

__init__(objective_function)[source]
__call__(solution)[source]
__class__

alias of type

__delattr__

Implement delattr(self, name).

__dir__() → list

default dir() implementation

__eq__

Return self==value.

__format__()

default object formatter

__ge__

Return self>=value.

__getattribute__

Return getattr(self, name).

__gt__

Return self>value.

__hash__

Return hash(self).

__le__

Return self<=value.

__lt__

Return self<value.

__ne__

Return self!=value.

__new__()

Create and return a new object. See help(type) for accurate signature.

__reduce__()

helper for pickle

__reduce_ex__()

helper for pickle

__repr__

Return repr(self).

__setattr__

Implement setattr(self, name, value).

__sizeof__() → int

size of object in memory, in bytes

__str__

Return str(self).

Goulib.optim.nelder_mead(f, x_start, step=0.1, no_improve_thr=1e-05, no_improv_break=10, max_iter=0, alpha=1.0, gamma=2.0, rho=-0.5, sigma=0.5)[source]

Pure Python implementation of the Nelder-Mead algorithm, also called the “downhill simplex method”, taken from https://github.com/fchollet/nelder-mead

Reference: https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method

Parameters:
  • f – function to optimize; must return a scalar score and operate over a numpy array of the same dimensions as x_start
  • x_start – (numpy array) initial position
  • step – (float) look-around radius in initial step
  • no_improve_thr, no_improv_break – (float, int): break after no_improv_break iterations with an improvement lower than no_improve_thr
  • max_iter – (int): always break after this number of iterations. Set it to 0 to loop indefinitely.
  • alpha, gamma, rho, sigma – (floats): parameters of the algorithm (see the Wikipedia page for reference)
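A self-contained sketch of the algorithm with the same parameter roles (an independent illustration based on the referenced implementation, not a copy of Goulib's code):

```python
import copy

import numpy as np


def nelder_mead_sketch(f, x_start, step=0.1, no_improve_thr=1e-5,
                       no_improv_break=10, max_iter=0,
                       alpha=1.0, gamma=2.0, rho=-0.5, sigma=0.5):
    """minimize f from x_start; returns [best_point, best_score]"""
    dim = len(x_start)
    prev_best = f(x_start)
    res = [[x_start, prev_best]]
    for i in range(dim):                      # initial simplex: one step along each axis
        x = copy.copy(x_start)
        x[i] += step
        res.append([x, f(x)])

    iters = no_improv = 0
    while True:
        res.sort(key=lambda r: r[1])          # best vertex first, worst last
        best = res[0][1]
        if max_iter and iters >= max_iter:
            return res[0]
        iters += 1
        if best < prev_best - no_improve_thr:
            no_improv, prev_best = 0, best
        else:
            no_improv += 1
        if no_improv >= no_improv_break:
            return res[0]

        x0 = np.mean([r[0] for r in res[:-1]], axis=0)  # centroid of all but the worst

        xr = x0 + alpha * (x0 - res[-1][0])   # reflection
        rscore = f(xr)
        if res[0][1] <= rscore < res[-2][1]:
            res[-1] = [xr, rscore]
            continue
        if rscore < res[0][1]:                # expansion
            xe = x0 + gamma * (x0 - res[-1][0])
            escore = f(xe)
            res[-1] = [xe, escore] if escore < rscore else [xr, rscore]
            continue
        xc = x0 + rho * (x0 - res[-1][0])     # contraction
        cscore = f(xc)
        if cscore < res[-1][1]:
            res[-1] = [xc, cscore]
            continue
        x1 = res[0][0]                        # shrink the simplex toward the best point
        new_res = []
        for r in res:
            xs = x1 + sigma * (r[0] - x1)
            new_res.append([xs, f(xs)])
        res = new_res


# minimize a simple quadratic; the minimum is at (1, -2)
quad = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best, score = nelder_mead_sketch(quad, np.array([0.0, 0.0]))
```

alpha, gamma, rho and sigma are the reflection, expansion, contraction and shrink coefficients from the Wikipedia description.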
class Goulib.optim.BinDict(capacity, f=<function _Bin.<lambda>>)[source]

Bases: Goulib.optim._Bin, dict

a container with a limited capacity
Parameters:
  • capacity – int, float, tuple or whatever defines the capacity of the Bin
  • f – function f(x) returning the capacity used by item x; must return the empty capacity when f(0) is called
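The capacity/f contract can be illustrated with a minimal sketch (class name and details are hypothetical, not Goulib's implementation):

```python
class BinDictSketch(dict):
    """a dict with a limited capacity; f(x) returns the capacity used by item x"""

    def __init__(self, capacity, f=lambda x: x):
        super().__init__()
        self.capacity = capacity
        self.f = f
        self.used = f(0)  # the empty capacity, as the docstring requires

    def fits(self, item):
        """True if item fits without exceeding capacity"""
        return self.used + self.f(item) <= self.capacity

    def __setitem__(self, key, item):
        if not self.fits(item):
            raise OverflowError('item does not fit in bin')
        if key in self:
            self.used -= self.f(self[key])  # replacing a value frees its old share
        self.used += self.f(item)
        super().__setitem__(key, item)


b = BinDictSketch(10)
b['a'] = 4
b['b'] = 5
# b.fits(2) is False: only 1 unit of capacity remains
```

With f=len and string items, the same sketch would limit the total character count instead.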

__isub__(key)[source]
__delitem__(key)
__iadd__(key, item)[source]
__setitem__(key, item)
__class__

alias of type

__contains__()

True if D has a key k, else False.

__delattr__

Implement delattr(self, name).

__dir__() → list

default dir() implementation

__eq__

Return self==value.

__format__()

default object formatter

__ge__

Return self>=value.

__getattribute__

Return getattr(self, name).

__getitem__()

x.__getitem__(y) <==> x[y]

__gt__

Return self>value.

__hash__ = None
__init__(capacity, f=<function _Bin.<lambda>>)

a container with a limited capacity
Parameters:
  • capacity – int, float, tuple or whatever defines the capacity of the Bin
  • f – function f(x) returning the capacity used by item x; must return the empty capacity when f(0) is called

__iter__

Implement iter(self).

__le__

Return self<=value.

__len__

Return len(self).

__lt__

Return self<value.

__ne__

Return self!=value.

__new__()

Create and return a new object. See help(type) for accurate signature.

__reduce__()

helper for pickle

__reduce_ex__()

helper for pickle

__repr__()
__setattr__

Implement setattr(self, name, value).

__sizeof__() → size of D in memory, in bytes
__str__

Return str(self).

clear() → None. Remove all items from D.
copy() → a shallow copy of D
fits(item)
Returns: bool – True if item fits in the bin without exceeding its capacity
fromkeys()

Returns a new dict with keys from iterable and values equal to value.

get(k[, d]) → D[k] if k in D, else d. d defaults to None.
items() → a set-like object providing a view on D's items
keys() → a set-like object providing a view on D's keys
pop(k[, d]) → v, remove specified key and return the corresponding value.

If key is not found, d is returned if given, otherwise KeyError is raised

popitem() → (k, v), remove and return some (key, value) pair as a

2-tuple; but raise KeyError if D is empty.

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D
size()
update([E, ]**F) → None. Update D from dict/iterable E and F.

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values
class Goulib.optim.BinList(capacity, f=<function _Bin.<lambda>>)[source]

Bases: Goulib.optim._Bin, list

a container with a limited capacity
Parameters:
  • capacity – int, float, tuple or whatever defines the capacity of the Bin
  • f – function f(x) returning the capacity used by item x; must return the empty capacity when f(0) is called

__iadd__(item)[source]
append(item)
insert(i, item)[source]
extend(more)[source]
__isub__(item)[source]
remove(item)
pop(i=-1)[source]
__setitem__(i, item)[source]

called when replacing a value in list

__add__

Return self+value.

__class__

alias of type

__contains__

Return key in self.

__delattr__

Implement delattr(self, name).

__delitem__

Delete self[key].

__dir__() → list

default dir() implementation

__eq__

Return self==value.

__format__()

default object formatter

__ge__

Return self>=value.

__getattribute__

Return getattr(self, name).

__getitem__()

x.__getitem__(y) <==> x[y]

__gt__

Return self>value.

__hash__ = None
__imul__

Implement self*=value.

__init__(capacity, f=<function _Bin.<lambda>>)

a container with a limited capacity
Parameters:
  • capacity – int, float, tuple or whatever defines the capacity of the Bin
  • f – function f(x) returning the capacity used by item x; must return the empty capacity when f(0) is called

__iter__

Implement iter(self).

__le__

Return self<=value.

__len__

Return len(self).

__lt__

Return self<value.

__mul__

Return self*value.

__ne__

Return self!=value.

__new__()

Create and return a new object. See help(type) for accurate signature.

__reduce__()

helper for pickle

__reduce_ex__()

helper for pickle

__repr__()
__reversed__()

L.__reversed__() – return a reverse iterator over the list

__rmul__

Return self*value.

__setattr__

Implement setattr(self, name, value).

__sizeof__()

L.__sizeof__() – size of L in memory, in bytes

__str__

Return str(self).

clear() → None -- remove all items from L
copy() → list -- a shallow copy of L
count(value) → integer -- return number of occurrences of value
fits(item)
Returns: bool – True if item fits in the bin without exceeding its capacity
index(value[, start[, stop]]) → integer -- return first index of value.

Raises ValueError if the value is not present.

reverse()

L.reverse() – reverse IN PLACE

size()
sort(key=None, reverse=False) → None -- stable sort *IN PLACE*
Goulib.optim.first_fit_decreasing(items, bins, maxbins=0)[source]

fit items in bins using the “first fit decreasing” method
Parameters:
  • items – iterable of items
  • bins – iterable of Bins; must have at least one Bin
Returns: list of items that didn’t fit (bins are filled by side effect)
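The method sorts items by decreasing size and places each into the first bin where it fits; a minimal sketch using a simplified [capacity, used] bin model instead of Goulib's Bin classes:

```python
def first_fit_decreasing_sketch(items, bins, f=lambda x: x):
    """bins: list of [capacity, used] pairs, mutated in place.
    Returns the items that did not fit."""
    leftovers = []
    for item in sorted(items, key=f, reverse=True):   # largest items first
        for b in bins:
            capacity, used = b
            if used + f(item) <= capacity:
                b[1] = used + f(item)                 # place in the first bin that fits
                break
        else:
            leftovers.append(item)                    # no bin could take it
    return leftovers


bins = [[10, 0], [10, 0]]
left = first_fit_decreasing_sketch([7, 5, 4, 3, 2], bins)
# left == [2]; bins are now [[10, 10], [10, 9]]
```

First fit decreasing is a classic approximation for bin packing: it uses at most roughly 11/9 of the optimal number of bins.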

Goulib.optim.hillclimb(init_function, move_operator, objective_function, max_evaluations)[source]

hillclimb until either max_evaluations is reached or we are at a local optimum
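A minimal sketch of such a hill-climbing loop, assuming the objective is maximized and move_operator yields neighbour solutions (not Goulib's exact code):

```python
def hillclimb_sketch(init_function, move_operator, objective_function, max_evaluations):
    """returns (num_evaluations, best_score, best); maximizes the objective"""
    best = init_function()
    best_score = objective_function(best)
    num_evaluations = 1
    while num_evaluations < max_evaluations:
        moved = False
        for candidate in move_operator(best):      # examine neighbour solutions
            if num_evaluations >= max_evaluations:
                break
            score = objective_function(candidate)
            num_evaluations += 1
            if score > best_score:                 # take the first improvement found
                best, best_score, moved = candidate, score, True
                break
        if not moved:                              # no neighbour improves: local optimum
            break
    return num_evaluations, best_score, best


# maximize -(x-5)^2 over integers, moving by +/-1 from the current point
n, score, x = hillclimb_sketch(lambda: 0,
                               lambda x: iter([x + 1, x - 1]),
                               lambda x: -(x - 5) ** 2,
                               100)
# x == 5, score == 0
```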

Goulib.optim.hillclimb_and_restart(init_function, move_operator, objective_function, max_evaluations)[source]

repeatedly hillclimb, restarting from a new initial solution each time, until max_evaluations is reached

Goulib.optim.P(prev_score, next_score, temperature)[source]
Goulib.optim.kirkpatrick_cooling(start_temp, alpha)[source]
Goulib.optim.anneal(init_function, move_operator, objective_function, max_evaluations, start_temp, alpha)[source]
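These helpers are commonly implemented as a Metropolis-style acceptance probability and a geometric cooling schedule; a sketch under that assumption (not necessarily Goulib's exact code):

```python
import math


def P_sketch(prev_score, next_score, temperature):
    """probability of accepting a move from prev_score to next_score (maximizing)"""
    if next_score > prev_score:
        return 1.0                                  # always accept improvements
    # worse moves are accepted with exponentially decaying probability
    return math.exp(-abs(prev_score - next_score) / temperature)


def kirkpatrick_cooling_sketch(start_temp, alpha):
    """generator of a geometric cooling schedule: T, alpha*T, alpha^2*T, ..."""
    T = start_temp
    while True:
        yield T
        T = alpha * T
```

anneal would then draw temperatures from the cooling generator and accept each candidate move with probability P, which lets the search escape local optima early on and behaves like hill climbing as the temperature drops.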
Goulib.optim.reversed_sections(tour)[source]

generator to return all possible variations where the section between two cities is reversed

Goulib.optim.swapped_cities(tour)[source]

generator to create all possible variations where two cities have been swapped
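Both neighbourhood generators can be sketched as follows (illustrative implementations, not Goulib's exact code):

```python
def swapped_cities_sketch(tour):
    """yield all tours obtained by swapping two cities"""
    for i in range(len(tour)):
        for j in range(i + 1, len(tour)):
            copy = tour[:]
            copy[i], copy[j] = copy[j], copy[i]
            yield copy


def reversed_sections_sketch(tour):
    """yield all tours obtained by reversing a section of the tour"""
    n = len(tour)
    for i in range(n):
        for j in range(i + 2, n + 1):
            if (i, j) != (0, n):          # reversing the whole tour changes nothing
                yield tour[:i] + tour[i:j][::-1] + tour[j:]
```

For the TSP, reversed sections (2-opt style moves) usually give much better neighbourhoods than city swaps, because reversing a section only changes two edges of the tour.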

Goulib.optim.tour_length(points, dist, tour=None)[source]

generator of point-to-point distances along a tour
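Summing the generated leg distances gives the tour length; a sketch assuming a closed tour (the closing leg back to the start is included):

```python
def tour_length_sketch(points, dist, tour=None):
    """yield the distance of each leg along a closed tour"""
    tour = tour if tour is not None else list(range(len(points)))
    n = len(tour)
    for i in range(n):
        yield dist(points[tour[i]], points[tour[(i + 1) % n]])


points = [(0, 0), (3, 0), (3, 4)]
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
total = sum(tour_length_sketch(points, manhattan))  # 3 + 4 + 7 = 14
```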

Goulib.optim.tsp(points, dist, max_iterations=100, start_temp=None, alpha=None, close=True, rand=True)[source]

Traveling Salesman Problem. See http://en.wikipedia.org/wiki/Travelling_salesman_problem
Parameters:
  • points – iterable containing all points
  • dist – function returning the distance between 2 points: def dist(a, b)
  • max_iterations – maximum number of optimization steps
  • start_temp, alpha – parameters for the simulated annealing algorithm; if None, hill climbing is used
  • close – computes a closed TSP; if False, open TSP starting at points[0]
Returns: iterations, score, best – number of iterations used, minimal length found, best path as a list of indexes of points

class Goulib.optim.DifferentialEvolution(evaluator, population_size=50, f=None, cr=0.9, eps=0.01, n_cross=1, max_iter=10000, monitor_cycle=200, out=None, show_progress=False, show_progress_nth_cycle=1, insert_solution_vector=None, dither_constant=0.4)[source]

Bases: object

This is a python implementation of differential evolution taken from http://cci.lbl.gov/cctbx_sources/scitbx/differential_evolution.py

It assumes an evaluator class is passed in that has the following data members:
  • n – the number of parameters
  • domain – a list [(low, high)]*n with approximate upper and lower limits for each parameter
  • x – a placeholder for the final solution

A method called ‘target’ is also needed. It should take a parameter vector as input and return the value of the function to be minimized.

The code below was implemented on the basis of the following sources of information:
  1. http://www.icsi.berkeley.edu/~storn/code.html
  2. http://www.daimi.au.dk/~krink/fec05/articles/JV_ComparativeStudy_CEC04.pdf
  3. http://ocw.mit.edu/NR/rdonlyres/Sloan-School-of-Management/15-099Fall2003/A40397B9-E8FB-4B45-A41B-D1F69218901F/0/ses2_storn_price.pdf

The developers of the differential evolution method have this advice: (taken from ref. 1)

If you are going to optimize your own objective function with DE, you may try the following classical settings for the input file first: Choose method e.g. DE/rand/1/bin, set the number of parents NP to 10 times the number of parameters, select weighting factor F=0.8, and crossover constant CR=0.9. It has been found recently that selecting F from the interval [0.5, 1.0] randomly for each generation or for each difference vector, a technique called dither, improves convergence behaviour significantly, especially for noisy objective functions. It has also been found that setting CR to a low value, e.g. CR=0.2, helps optimizing separable functions since it fosters the search along the coordinate axes. On the contrary, this choice is not effective if parameter dependence is encountered, something which frequently occurs in real-world optimization problems rather than artificial test functions. So for parameter dependence the choice of CR=0.9 is more appropriate. Another interesting empirical finding is that raising NP above, say, 40 does not substantially improve the convergence, independent of the number of parameters. It is worthwhile to experiment with these suggestions. Make sure that you initialize your parameter vectors by exploiting their full numerical range, i.e. if a parameter is allowed to exhibit values in the range [-100, 100], it’s a good idea to pick the initial values from this range instead of unnecessarily restricting diversity.

Keep in mind that different problems often require different settings for NP, F and CR (have a look into the different papers to get a feeling for the settings). If you still get misconvergence you might want to try a different method. We mostly use DE/rand/1/… or DE/best/1/… . The crossover method is not so important although Ken Price claims that binomial is never worse than exponential. In case of misconvergence also check your choice of objective function. There might be a better one to describe your problem. Any knowledge that you have about the problem should be worked into the objective function. A good objective function can make all the difference.

Note: NP is called population_size in the routine below. Note: [0.5, 1.0] dither is the default behavior unless f is set to a value other than None.
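The evaluator interface described above can be sketched as follows (the class name and target function are illustrative):

```python
class SphereEvaluator:
    """evaluator interface expected by DifferentialEvolution:
    data members n, domain, x, and a target() method returning the value to minimize"""

    def __init__(self, n=3):
        self.n = n                          # the number of parameters
        self.domain = [(-5.0, 5.0)] * n     # approximate bounds for each parameter
        self.x = None                       # placeholder for the final solution

    def target(self, vector):
        # the function to be minimized; here a simple sphere function
        return sum(v * v for v in vector)


# usage (hypothetical, assuming Goulib is installed):
# de = DifferentialEvolution(SphereEvaluator(), population_size=30)
```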

__init__(evaluator, population_size=50, f=None, cr=0.9, eps=0.01, n_cross=1, max_iter=10000, monitor_cycle=200, out=None, show_progress=False, show_progress_nth_cycle=1, insert_solution_vector=None, dither_constant=0.4)[source]
optimize()[source]
make_random_population()[source]
score_population()[source]
__class__

alias of type

__delattr__

Implement delattr(self, name).

__dir__() → list

default dir() implementation

__eq__

Return self==value.

__format__()

default object formatter

__ge__

Return self>=value.

__getattribute__

Return getattr(self, name).

__gt__

Return self>value.

__hash__

Return hash(self).

__le__

Return self<=value.

__lt__

Return self<value.

__ne__

Return self!=value.

__new__()

Create and return a new object. See help(type) for accurate signature.

__reduce__()

helper for pickle

__reduce_ex__()

helper for pickle

__repr__

Return repr(self).

__setattr__

Implement setattr(self, name, value).

__sizeof__() → int

size of object in memory, in bytes

__str__

Return str(self).

evolve()[source]