Here's a brief example of defining a function in Python which is called by C++ code to apply the function to every element in a NumPy array. If I have understood correctly, there should be no copying of data, as we are passing a pointer to the data.
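The code attachment for this post isn't in this copy. A minimal Cython sketch of the idea (the file and function names are mine, not the original's) uses a typed memoryview, which wraps the NumPy array's existing buffer so nothing is copied; C++ code holding &data[0] would operate on the same memory:

```cython
# callback.pyx -- a hypothetical sketch, not the original code
def apply_to_array(f, double[::1] data):
    """Apply the Python callable f to every element of data, in place."""
    cdef Py_ssize_t i
    for i in range(data.shape[0]):
        data[i] = f(data[i])
```

After building with cythonize, calling apply_to_array(lambda v: v*v, a) squares the NumPy array a in place, with no intermediate copy.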
I know almost nothing about encryption, but while sitting through a talk about secure quantum communications, I started daydreaming about the problem of trying to decipher an encrypted message. One of the greatest challenges in solving inverse problems is that there usually is not a unique input state that produces the output you are trying to match. It struck me that cracking a code would be a lot harder if there were not a bijective mapping between the original message and the encrypted data. I've since learned that this idea is by no means new, but I thought it would be a fun exercise to write some simple Python code to encrypt and decrypt a message.

First, let's suppose we start with a text document which consists of only the 100 characters found in string.printable:

'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c'

Now we create a correspondence between symbols in this set and symbols in the output set. If each input symbol is always replaced with the same output symbol, it will be easy to crack. Instead, we are going to randomly assign 100 possible pairs of output symbols to each input symbol, so each instance of a symbol in the message will be replaced with a randomly selected pair; the pairs are nonetheless unique across symbols. Here is the code which generates these correspondences between input and output symbols and saves them as dictionaries.
from string import printable
from numpy import array, reshape
from pickle import dump
from itertools import product
from random import shuffle

if __name__ == '__main__':
    # Get list of all characters
    alphanum = list(printable)
    N = len(alphanum)
    # Make all possible pairs
    prod = [''.join(p) for p in product(alphanum, alphanum)]
    # Shuffle
    shuffle(prod)
    Prod = [list(p) for p in reshape(array(prod), (N, N)).T]
    cipher = dict(zip(alphanum, Prod))
    decipher = dict(zip(prod, alphanum*N))
    with open('cipher.pickle', 'wb') as filename:
        dump(cipher, filename)
    with open('decipher.pickle', 'wb') as filename:
        dump(decipher, filename)
Once we generate the cipher and decipher dictionaries, we then need a bit of code that reads a file and randomly selects allowable substitutions.

import sys
import pickle
from random import randint

if __name__ == '__main__':
    infile = sys.argv[1]
    with open(infile, 'r') as filename:
        text = ''.join(filename.readlines())
    with open('cipher.pickle', 'rb') as filename:
        cipher = pickle.load(filename)
    encoded_text = ''.join([cipher[c][randint(0, 99)] for c in text])
    encoded_file = ''.join(['encoded_', infile])
    with open(encoded_file, 'w') as filename:
        filename.write(encoded_text)
Finally a script to decode the original message.

import sys
import pickle

if __name__ == '__main__':
    infile = sys.argv[1]
    with open(infile, 'r') as filename:
        text = ''.join(filename.readlines())
    pairs = [text[i:i+2] for i in range(0, len(text), 2)]
    with open('decipher.pickle', 'rb') as filename:
        decipher = pickle.load(filename)
    decoded_text = ''.join([decipher[p] for p in pairs])
    print(decoded_text)
I think I have heard that codes are sometimes cracked by using assumptions about how often various characters appear in the source language. Since the human mind can easily see intelligible patterns among gibberish, whereas it is difficult to detect such patterns automatically, it seems like a good idea to pad out the message with meaningless symbols:

uS_jw6_TV, This is a secret message n7g29.WA/

The encrypted message looks like

HFP]+:/W6fXP SN7' n):LYcqpup[[0v=vxpd!od_ G6n!uX7',]h"JO)J"k&lSl GaP e7fM5E

The decode script recovers the original. Perhaps someone knowledgeable about encryption can comment on how difficult a code like this would be to crack if you did not know anything about the cipher.
I'm working on a script that automatically downloads and builds LAPACK, ATLAS, and NumPy, and one thing I needed to know for building ATLAS was the CPU frequency. Here's a short Python script for getting the CPU speed.
Download: cpuspeed.py
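The attached script itself isn't in this copy. One way to get the CPU frequency on Linux is to parse /proc/cpuinfo; the sketch below (the function name is my own, not necessarily the original's) does that, returning the first "cpu MHz" entry:

```python
import re

def parse_cpu_mhz(cpuinfo_text):
    """Return the first 'cpu MHz' value found in /proc/cpuinfo text, or None."""
    m = re.search(r'^cpu MHz\s*:\s*([\d.]+)', cpuinfo_text, re.MULTILINE)
    return float(m.group(1)) if m else None

if __name__ == '__main__':
    try:
        with open('/proc/cpuinfo') as f:
            print(parse_cpu_mhz(f.read()))
    except OSError:
        print('Could not read /proc/cpuinfo')
```

Note that this reports the current clock speed, which on machines with frequency scaling may be lower than the peak speed ATLAS wants.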
$ git clone https://github.com/gregvw/qho2electrons.git

Download the document and codes from GitHub:

$ git clone https://github.com/gregvw/qhocontrol.git
Consider computing the ground state of a hemispherical quantum dot. Let's begin by using cylindrical coordinates, so that the z axis passes perpendicularly through the base of the dot and the potential is axisymmetric. The Hamiltonian is
\[
H = -\frac{1}{r}\frac{\partial}{\partial r}\left(r \frac{\partial}{\partial r}\right) - \frac{1}{r^2}\frac{\partial^2}{\partial\phi^2} - \frac{\partial^2}{\partial z^2} + V(r,z)
\]
where
\[
V(r,z) =
\begin{cases}
0 & \text{if } r^2+z^2 < 1,\;z>0\\
V_0 & \text{otherwise}
\end{cases}
\]
We'll choose \(V_0=100\), but you can change it in the code. The ground state is going to be invariant under rotation about the z axis, so we can simply discard the angular derivative term in the Hamiltonian. Let the computation domain be \(\Omega=[0,R]\times[z_1,z_2]\) and define the inner product
\[
(u,v) = \int\limits_0^R \int\limits_{z_1}^{z_2} u(r,z)\, v(r,z)\, r\, dr\, dz
\]
We can write the eigenvalue problem in the weak form as
\[
(\varphi_r,\psi_r) + (\varphi_z,\psi_z) + (\varphi,V\psi) = \lambda(\varphi,\psi)
\]
which must hold for all admissible test functions \(\varphi\).
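To see where this comes from, multiply \(H\psi=\lambda\psi\) by a test function \(\varphi\) and the weight \(r\), and integrate over \(\Omega\) (the angular term has already been discarded):
\[
-\int_\Omega \varphi\,\frac{\partial}{\partial r}\left(r\frac{\partial\psi}{\partial r}\right)dr\,dz
-\int_\Omega \varphi\,\frac{\partial^2\psi}{\partial z^2}\,r\,dr\,dz
+\int_\Omega \varphi\,V\psi\,r\,dr\,dz
= \lambda\int_\Omega \varphi\,\psi\,r\,dr\,dz
\]
Integrating the first two terms by parts, with the boundary terms vanishing for admissible \(\varphi\) (and the \(r\) weight killing the contribution at \(r=0\)), gives exactly \((\varphi_r,\psi_r)+(\varphi_z,\psi_z)+(\varphi,V\psi)=\lambda(\varphi,\psi)\).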
One of the challenges of doing this the naive way that I did is that you will not know in advance what dependencies the installer will look for; you only find out what is missing when the installer finally fails, which in my case took up to 40 minutes before the problem was identified and the process had to be restarted.
So for anyone who wants all these packages installed, here is a list of everything I installed on a fresh installation of Linux Mint 17 64-bit before running dorsal.sh. Note that what you need is a subset of this, and I am listing some packages which may be extraneous.
A uniform mesh is wasteful, as the mesh size needed to adequately resolve the "interesting" regions leads to severely over-resolving the "boring" ones; a nonuniform mesh which is locally refined where it needs to be is a better approach. Whether or not we know where the interesting regions of the solution will be depends on the sort of problem we are solving. If it is a moving shock in a fluids problem, then an adaptive grid which depends on the solution of the PDE will be needed, and there exist all sorts of methods for doing this. To keep things simple here, we are going to consider the case where the locations of rapidly varying solutions are known a priori. Suppose we are given clustering locations
\[
\xi_1,...,\xi_m\in[0,1]
\]
and now we want to find a set of grid points \(x_0,...,x_{n-1}\in[0,1]\) such that the grid points have a higher density near the clustering locations.
To do this, we want a monotone increasing function \(\phi:[0,1]\rightarrow[0,1]\). Consider
an example function that does this
\[
\phi(x)=x+\frac{1}{10}\sin(3\pi x)
\]
Let's plot this function
This plot was produced using this Python script 
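The original script isn't included in this copy; a minimal sketch that reproduces the plot of \(\phi(x)=x+\frac{1}{10}\sin(3\pi x)\) might look like this (the file name and labels are my own):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # write to file; no display needed
import matplotlib.pyplot as plt

def phi(x):
    # The monotone map phi(x) = x + sin(3*pi*x)/10 from the text
    return x + np.sin(3 * np.pi * x) / 10

x = np.linspace(0, 1, 500)
plt.plot(x, phi(x))
plt.xlabel('x')
plt.ylabel('phi(x)')
plt.savefig('phi_plot.png')
```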

It follows then that we want the spacing map to have a relatively small derivative at the \(\xi_j\) for \(j=1,...,m\), while keeping \(\phi'(x)>0\) for all \(x\in[0,1]\) so that the map remains monotone.
It's actually a bit easier to construct the opposite of what is desired: a function \(\phi\) such that \(\phi'(\xi_j)\) is large. We will do this, and then effectively use the inverse map by finding the \(x_i\) such that
\(\phi(x_i)=\frac{i}{n-1}\); where \(\phi'\) is large, consecutive grid points are forced close together. Let \(\gamma_j\) be a parameter which is inversely related to how densely clustered we want the grid points around \(\xi_j\). We choose a mapping
\[
\tilde\phi(x)=\frac{m}{2}+\sum\limits_{j=1}^m\arctan\left(\frac{x-\xi_j}{\gamma_j}\right)
\]
While this is monotone increasing and has the property that \(\tilde\phi'(\xi_j)\) is large, it does not satisfy the conditions \(\tilde\phi(0)=0\) and \(\tilde\phi(1)=1\). We can easily normalize to obtain the desired boundary values using an affine transformation
\[
\phi(x)=\frac{\tilde\phi(x)-\tilde\phi(0)}{\tilde\phi(1)-\tilde\phi(0)}
\]
The derivative of this map is
\[
\phi'(x) = \frac{1}{\tilde\phi(1)-\tilde\phi(0)}
\sum\limits_{j=1}^m \frac{\gamma_j}{(x-\xi_j)^2+\gamma_j^2}
\]
To obtain the grid points that we desire, we need to solve \(\phi(x_i)=\frac{i}{n-1}\), which we can
do using Newton's method
\[
x_i^0 = \frac{i}{n-1},\quad x_i^{k+1}=x_i^k-\frac{\phi(x_i^k)-x_i^0}{\phi'(x_i^k)}
\]
When the \(\gamma_j\) are quite small, this scheme may fail to converge. This can be overcome by using a continuation strategy, where we begin with a much larger \(\gamma_j\) than we want and make successive reductions until we reach the target value.
We could try to calculate an optimal continuation schedule, but the calculation is so quick and easy that we can simply use many more continuation steps than are really needed, just to be safe.
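As a concrete sketch of the scheme above (the function names and the continuation schedule are my own choices, not the original code), assuming NumPy:

```python
import numpy as np

def make_phi(xi, gamma):
    """Normalized arctan map phi and its derivative for clustering points xi."""
    xi = np.asarray(xi, float)
    gamma = np.asarray(gamma, float)
    def raw(x):
        return np.arctan((np.asarray(x, float)[..., None] - xi) / gamma).sum(axis=-1)
    a, b = raw(0.0), raw(1.0)
    def phi(x):
        # Affine normalization so phi(0) = 0 and phi(1) = 1
        return (raw(x) - a) / (b - a)
    def dphi(x):
        x = np.asarray(x, float)
        return (gamma / ((x[..., None] - xi)**2 + gamma**2)).sum(axis=-1) / (b - a)
    return phi, dphi

def cluster_grid(n, xi, gamma, steps=30, newton_iters=50):
    """Solve phi(x_i) = i/(n-1) by Newton's method with continuation in gamma."""
    gamma = np.asarray(gamma, float)
    target = np.linspace(0.0, 1.0, n)
    x = target.copy()                        # initial guess x_i^0 = i/(n-1)
    # Continuation: start with gamma scaled up 25x, reduce to the target value.
    for scale in np.geomspace(25.0, 1.0, steps):
        phi, dphi = make_phi(xi, gamma * scale)
        for _ in range(newton_iters):
            step = (phi(x) - target) / dphi(x)
            x -= step
            if np.max(np.abs(step)) < 1e-12:
                break
    return x

x = cluster_grid(200, xi=[0.25, 0.75], gamma=[0.02, 0.02])
```

The resulting grid is monotone, hits 0 and 1 exactly, and has its smallest spacing near the clustering points \(\xi_j\).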
Now we can construct a nonuniform grid with which to solve a boundary value problem. Consider the BVP
\[
(p(x)u')'=1,\quad u(0)=0,\quad u(1)=1
\]
where \(p(x)\) is positive but quite small at known locations. One such function is
\[
p(x) = 1.001 + \cos(4\pi x)
\]
We can solve this BVP quite easily using FEniCS. The implementation can be obtained from GitHub here. For 200 grid points, \(\xi_1=\frac{1}{4}\), \(\xi_2=\frac{3}{4}\), and \(\gamma_1=\gamma_2=\frac{1}{50}\), we obtain the following solution:
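The linked FEniCS code isn't reproduced here. As a self-contained illustration of solving this BVP on a nonuniform grid, here is a finite-difference sketch; this is my own substitute for the FEniCS implementation, using a simple sine-based clustered grid rather than the arctan map, for brevity:

```python
import numpy as np

def solve_bvp(x, p, f=1.0):
    """Finite-difference solve of (p u')' = f with u(0)=0, u(1)=1 on grid x."""
    n = len(x)
    h = np.diff(x)                      # cell widths
    pm = p(0.5 * (x[:-1] + x[1:]))      # p sampled at cell midpoints
    A = np.zeros((n, n))
    b = np.full(n, float(f))
    A[0, 0] = 1.0; b[0] = 0.0           # u(0) = 0
    A[-1, -1] = 1.0; b[-1] = 1.0        # u(1) = 1
    for i in range(1, n - 1):
        w = 0.5 * (h[i - 1] + h[i])     # dual cell width
        A[i, i - 1] = pm[i - 1] / (h[i - 1] * w)
        A[i, i + 1] = pm[i] / (h[i] * w)
        A[i, i] = -(A[i, i - 1] + A[i, i + 1])
    return np.linalg.solve(A, b)

p = lambda t: 1.001 + np.cos(4 * np.pi * t)
# Grid clustered near x = 1/4 and 3/4, where p is smallest: the map
# x(s) = s + 0.9 sin(4 pi s)/(4 pi) has derivative in [0.1, 1.9] > 0.
s = np.linspace(0.0, 1.0, 200)
x = s + (0.9 / (4 * np.pi)) * np.sin(4 * np.pi * s)
u = solve_bvp(x, p)
```

The discretization is the standard conservative three-point stencil for \((p u')'\) on a nonuniform grid, with \(p\) evaluated at cell midpoints.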
Run the codes using
$ python epm.py Si
$ python epm.py GaAs
Download: epm.pdf