Python equivalent of MATLAB’s “ismember” function

After many attempts to optimize my code, it seems the last resort would be to run the code below on multiple cores. I don’t know exactly how to convert/restructure my code so that it runs much faster using multiple cores. I would appreciate guidance on reaching the end goal: running this code as fast as possible for arrays A and B, where each array holds about 700,000 elements. Here is the code using small arrays; the 700k-element arrays are commented out.
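The original snippet isn’t reproduced here, but the membership test itself rarely needs multiple cores: NumPy’s np.isin (np.in1d on older NumPy versions) is a vectorized equivalent of MATLAB’s ismember and handles arrays of this size in a fraction of a second. A minimal sketch, assuming A and B are 1-D integer arrays; the random test data is an illustrative stand-in for the real 700k-element inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 1_000_000, size=700_000)   # stand-in for the real array A
B = rng.integers(0, 1_000_000, size=700_000)   # stand-in for the real array B

mask = np.isin(A, B)            # boolean array: True where A[i] occurs in B
matches = A[mask]               # the elements of A that are members of B
indices = np.nonzero(mask)[0]   # their positions in A
```

If MATLAB’s second output (the location of each match in B) is also needed, a sort-based lookup with np.searchsorted is the usual follow-up; for the pure membership question, the boolean mask above is enough and is typically far cheaper than any multiprocessing setup.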

Numpy Pure Functions for performance, caching

I’m writing some moderately performance-critical code in NumPy.
This code will be in the innermost loop of a computation whose run time is measured in hours.
A quick calculation suggests that this code will be executed something like 10^12 times in some variations of the calculation.
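Since the question is about pure functions and caching: memoization can pay off when the same inputs recur, but functools.lru_cache cannot be applied to a NumPy function directly because arrays are unhashable. A minimal sketch of one workaround, assuming a pure function of a single array argument; the decorator np_cache and the kernel expensive_kernel are illustrative names, not library APIs:

```python
import functools
import numpy as np

def np_cache(func):
    """Memoize a pure function of a single NumPy array argument."""
    @functools.lru_cache(maxsize=None)
    def cached(key, shape, dtype):
        # rebuild the (read-only) array from its hashable byte representation
        arr = np.frombuffer(key, dtype=dtype).reshape(shape)
        return func(arr)

    @functools.wraps(func)
    def wrapper(arr):
        # bytes + shape + dtype together form a hashable cache key
        return cached(arr.tobytes(), arr.shape, arr.dtype.str)
    return wrapper

@np_cache
def expensive_kernel(x):
    # placeholder for the real inner-loop computation
    return np.sin(x) ** 2 + np.cos(x) ** 2

x = np.linspace(0.0, 1.0, 1000)
y1 = expensive_kernel(x)   # computed
y2 = expensive_kernel(x)   # served from the cache
```

Whether this helps depends entirely on how often inputs repeat: hashing 10^12 distinct arrays would only add overhead, so the cache is worthwhile only if a modest set of inputs recurs across those calls.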

How do I put a constraint on SciPy curve fit?

I’m trying to fit the distribution of some experimental values with a custom probability density function. Obviously, the integral of the resulting function should always be equal to 1, but the result of a simple scipy.optimize.curve_fit(function, dataBincenters, dataCounts) call never satisfies this condition.
What is the best way to solve this problem?
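curve_fit has no built-in equality constraints, so a common workaround is to build the normalization into the model itself: divide the unnormalized shape by its numerical integral, and the unit-integral condition then holds for every parameter set the optimizer tries. A minimal sketch under assumptions not in the question: raw_pdf is an illustrative Gaussian-like stand-in for the custom density, the integration limits are ±∞, and dataCounts is assumed to be density-normalized (e.g. from np.histogram(..., density=True)) so bin heights are comparable to a PDF:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

def raw_pdf(x, a, b):
    # unnormalized shape of the density; a Gaussian-like stand-in
    return np.exp(-a * (x - b) ** 2)

def normalized_pdf(x, a, b):
    # divide by the numerical integral so the model integrates to 1
    # for every (a, b) the optimizer tries
    norm, _ = quad(raw_pdf, -np.inf, np.inf, args=(a, b))
    return raw_pdf(x, a, b) / norm

# synthetic "experimental" values; density=True makes bin heights PDF-like
rng = np.random.default_rng(0)
samples = rng.normal(0.5, 0.3, size=5000)
dataCounts, edges = np.histogram(samples, bins=50, density=True)
dataBincenters = 0.5 * (edges[:-1] + edges[1:])

# keep a strictly positive so the integral converges
popt, pcov = curve_fit(normalized_pdf, dataBincenters, dataCounts,
                       p0=(1.0, 0.0),
                       bounds=([1e-6, -np.inf], [np.inf, np.inf]))
```

If a true hard constraint on arbitrary parameter combinations is needed instead, scipy.optimize.minimize with an explicit constraint on a least-squares objective is the more general route, but the normalize-inside-the-model trick is usually simpler and numerically better behaved.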