INSTRUCTION: Problem: I have a data-set which contains many numerical and categorical values, and I want to only test for outlying values on the numerical columns and remove rows based on those columns. I am trying it like this: df = df[(np.abs(stats.zscore(df)) < 3).all(axis=1)] Where it will remove all outlying values in all columns, however of course because I have categorical columns I am met with the following error: TypeError: unsupported operand type(s) for +: 'float' and 'str' I know the solution above works because if I limit my df to only contain numeric columns it all works fine but I don't want to lose the rest of the information in my dataframe in the process of evaluating outliers from numeric columns. A: <code> from scipy import stats import pandas as pd import numpy as np LETTERS = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ') df = pd.DataFrame({'NUM1': np.random.randn(50)*100, 'NUM2': np.random.uniform(0,1,50), 'NUM3': np.random.randint(100, size=50), 'CAT1': ["".join(np.random.choice(LETTERS,1)) for _ in range(50)], 'CAT2': ["".join(np.random.choice(['pandas', 'r', 'julia', 'sas', 'stata', 'spss'],1)) for _ in range(50)], 'CAT3': ["".join(np.random.choice(['postgres', 'mysql', 'sqlite', 'oracle', 'sql server', 'db2'],1)) for _ in range(50)] }) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(df) </code> SOLUTION: df = df[(np.abs(stats.zscore(df.select_dtypes(exclude='object'))) < 3).all(axis=1)]
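A minimal, self-contained sketch of the dtype-split idea above (the toy frame and column names are made up here): the z-score test only sees the numeric columns, while the resulting boolean row mask filters the whole frame, so the categorical columns survive.
<code>
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical mixed-dtype frame
df = pd.DataFrame({
    "NUM1": np.random.randn(100) * 100,
    "NUM2": np.random.uniform(0, 1, 100),
    "CAT1": np.random.choice(list("ABC"), 100),
})

numeric = df.select_dtypes(exclude="object")            # numeric columns only
mask = (np.abs(stats.zscore(numeric)) < 3).all(axis=1)  # per-row outlier test
df_filtered = df[mask]                                  # CAT1 is still present
print(df_filtered.dtypes)
</code>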
INSTRUCTION: Problem: I would like to write a program that solves the definite integral below in a loop which considers a different value of the constant c per iteration. I would then like each solution to the integral to be outputted into a new array. How do I best write this program in python? ∫2cxdx with limits between 0 and 1. from scipy import integrate integrate.quad Is acceptable here. My major struggle is structuring the program. Here is an old attempt (that failed) # import c fn = 'cooltemp.dat' c = loadtxt(fn,unpack=True,usecols=[1]) I=[] for n in range(len(c)): # equation eqn = 2*x*c[n] # integrate result,error = integrate.quad(lambda x: eqn,0,1) I.append(result) I = array(I) A: <code> import scipy.integrate c = 5 low = 0 high = 1 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = scipy.integrate.quadrature(lambda x: 2*c*x, low, high)[0]
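A hedged sketch of the looped version the question actually asks for, assuming the constants arrive as a NumPy array rather than being read from 'cooltemp.dat'; `scipy.integrate.quad` is used since the question explicitly allows it.
<code>
import numpy as np
from scipy import integrate

# Hypothetical array of constants (the original loads them from a file)
c_values = np.array([0.5, 1.0, 2.0, 5.0])

# One definite integral of 2*c*x over [0, 1] per constant
I = np.array([
    integrate.quad(lambda x, c=c: 2 * c * x, 0, 1)[0]
    for c in c_values
])
print(I)  # analytically the integral equals c, so this mirrors c_values
</code>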
INSTRUCTION: Problem: I am able to interpolate the data points (dotted lines), and am looking to extrapolate them in both direction. How can I extrapolate these curves in Python with NumPy/SciPy? The code I used for the interpolation is given below, import numpy as np import matplotlib.pyplot as plt from scipy import interpolate x = np.array([[0.12, 0.11, 0.1, 0.09, 0.08], [0.13, 0.12, 0.11, 0.1, 0.09], [0.15, 0.14, 0.12, 0.11, 0.1], [0.17, 0.15, 0.14, 0.12, 0.11], [0.19, 0.17, 0.16, 0.14, 0.12], [0.22, 0.19, 0.17, 0.15, 0.13], [0.24, 0.22, 0.19, 0.16, 0.14], [0.27, 0.24, 0.21, 0.18, 0.15], [0.29, 0.26, 0.22, 0.19, 0.16]]) y = np.array([[71.64, 78.52, 84.91, 89.35, 97.58], [66.28, 73.67, 79.87, 85.36, 93.24], [61.48, 69.31, 75.36, 81.87, 89.35], [57.61, 65.75, 71.7, 79.1, 86.13], [55.12, 63.34, 69.32, 77.29, 83.88], [54.58, 62.54, 68.7, 76.72, 82.92], [56.58, 63.87, 70.3, 77.69, 83.53], [61.67, 67.79, 74.41, 80.43, 85.86], [70.08, 74.62, 80.93, 85.06, 89.84]]) plt.figure(figsize = (5.15,5.15)) plt.subplot(111) for i in range(5): x_val = np.linspace(x[0, i], x[-1, i], 100) x_int = np.interp(x_val, x[:, i], y[:, i]) tck = interpolate.splrep(x[:, i], y[:, i], k = 2, s = 4) y_int = interpolate.splev(x_val, tck, der = 0) plt.plot(x[:, i], y[:, i], linestyle = '', marker = 'o') plt.plot(x_val, y_int, linestyle = ':', linewidth = 0.25, color = 'black') plt.xlabel('X') plt.ylabel('Y') plt.show() That seems only work for interpolation. I want to use B-spline (with the same parameters setting as in the code) in scipy to do extrapolation. The result should be (5, 100) array containing f(x_val) for each group of x, y(just as shown in the code). A: <code> from scipy import interpolate import numpy as np x = np.array([[0.12, 0.11, 0.1, 0.09, 0.08], [0.13, 0.12, 0.11, 0.1, 0.09], [0.15, 0.14, 0.12, 0.11, 0.1], [0.17, 0.15, 0.14, 0.12, 0.11], [0.19, 0.17, 0.16, 0.14, 0.12], [0.22, 0.19, 0.17, 0.15, 0.13], [0.24, 0.22, 0.19, 0.16, 0.14], [0.27, 0.24, 0.21, 0.18, 0.15], [0.29, 0.26, 0.22, 0.19, 0.16]]) y = np.array([[71.64, 78.52, 84.91, 89.35, 97.58], [66.28, 73.67, 79.87, 85.36, 93.24], [61.48, 69.31, 75.36, 81.87, 89.35], [57.61, 65.75, 71.7, 79.1, 86.13], [55.12, 63.34, 69.32, 77.29, 83.88], [54.58, 62.54, 68.7, 76.72, 82.92], [56.58, 63.87, 70.3, 77.69, 83.53], [61.67, 67.79, 74.41, 80.43, 85.86], [70.08, 74.62, 80.93, 85.06, 89.84]]) x_val = np.linspace(-1, 1, 100) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = np.zeros((5, 100)) for i in range(5): extrapolator = interpolate.UnivariateSpline(x[:, i], y[:, i], k = 2, s = 4) y_int = extrapolator(x_val) result[i, :] = y_int
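A small 1-D sketch (made-up data) of why `UnivariateSpline` fits this task: unlike `np.interp`, it keeps evaluating the fitted polynomial pieces outside the data range, which is what provides the extrapolation.
<code>
import numpy as np
from scipy import interpolate

x = np.linspace(0, 1, 10)
y = x ** 2
spl = interpolate.UnivariateSpline(x, y, k=2, s=0)  # s=0: pass through the points
print(spl(np.array([-0.5, 0.5, 1.5])))              # values outside [0, 1] are extrapolated
</code>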
INSTRUCTION: Problem: I have a sparse matrix in csr format (which makes sense for my purposes, as it has lots of rows but relatively few columns, ~8million x 90). My question is, what's the most efficient way to access a particular value from the matrix given a row,column tuple? I can quickly get a row using matrix.getrow(row), but this also returns 1-row sparse matrix, and accessing the value at a particular column seems clunky. The only reliable method I've found to get a particular matrix value, given the row and column, is: getting the row vector, converting to dense array, and fetching the element on column. But this seems overly verbose and complicated. and I don't want to change it to dense matrix to keep the efficiency. Is there a simpler/faster method I'm missing? A: <code> import numpy as np from scipy.sparse import csr_matrix arr = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) M = csr_matrix(arr) row = 2 column = 3 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = M[row,column]
INSTRUCTION: Problem: I have the following code to run Wilcoxon rank-sum test print stats.ranksums(pre_course_scores, during_course_scores) RanksumsResult(statistic=8.1341352369246582, pvalue=4.1488919597127145e-16) However, I am interested in extracting the pvalue from the result. I could not find a tutorial about this. i.e.Given two ndarrays, pre_course_scores, during_course_scores, I want to know the pvalue of ranksum. Can someone help? A: <code> import numpy as np from scipy import stats np.random.seed(10) pre_course_scores = np.random.randn(10) during_course_scores = np.random.randn(10) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(p_value) </code> SOLUTION: p_value = stats.ranksums(pre_course_scores, during_course_scores).pvalue
INSTRUCTION: Problem: Suppose I have a integer matrix which represents who has emailed whom and how many times. I want to find people that have not emailed each other. For social network analysis I'd like to make a simple undirected graph. So I need to convert the matrix to binary matrix. My question: is there a fast, convenient way to reduce the decimal matrix to a binary matrix. Such that: 26, 3, 0 3, 195, 1 0, 1, 17 Becomes: 0, 0, 1 0, 0, 0 1, 0, 0 A: <code> import scipy import numpy as np a = np.array([[26, 3, 0], [3, 195, 1], [0, 1, 17]]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(a) </code> SOLUTION: a = 1-np.sign(a)
INSTRUCTION: Problem: I have a binary array, say, a = np.random.binomial(n=1, p=1/2, size=(9, 9)). I perform median filtering on it using a 3 x 3 kernel on it, like say, b = nd.median_filter(a, 3). I would expect that this should perform median filter based on the pixel and its eight neighbours. However, I am not sure about the placement of the kernel. The documentation says, origin : scalar, optional. The origin parameter controls the placement of the filter. Default 0.0. Now, I want to shift this filter one cell to the right.How can I achieve it? Thanks. A: <code> import numpy as np import scipy.ndimage a= np.zeros((5, 5)) a[1:4, 1:4] = np.arange(3*3).reshape((3, 3)) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(b) </code> SOLUTION: b = scipy.ndimage.median_filter(a, size=(3, 3), origin=(0, 1))
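A 1-D sketch (toy signal, assumed here) of what `origin` changes: the same 3-wide median window is applied in both calls, but the non-zero origin shifts which neighbourhood feeds each output element.
<code>
import numpy as np
import scipy.ndimage

a = np.array([0, 0, 1, 1, 1, 0, 0], dtype=float)
print(scipy.ndimage.median_filter(a, size=3, origin=0))  # centred window
print(scipy.ndimage.median_filter(a, size=3, origin=1))  # shifted window
</code>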
INSTRUCTION: Problem: I am having a problem with minimization procedure. Actually, I could not create a correct objective function for my problem. Problem definition • My function: yn = a_11*x1**2 + a_12*x2**2 + ... + a_m*xn**2,where xn- unknowns, a_m - coefficients. n = 1..N, m = 1..M • In my case, N=5 for x1,..,x5 and M=3 for y1, y2, y3. I need to find the optimum: x1, x2,...,x5 so that it can satisfy the y My question: • How to solve the question using scipy.optimize? My code: (tried in lmfit, but return errors. Therefore I would ask for scipy solution) import numpy as np from lmfit import Parameters, minimize def func(x,a): return np.dot(a, x**2) def residual(pars, a, y): vals = pars.valuesdict() x = vals['x'] model = func(x,a) return (y - model) **2 def main(): # simple one: a(M,N) = a(3,5) a = np.array([ [ 0, 0, 1, 1, 1 ], [ 1, 0, 1, 0, 1 ], [ 0, 1, 0, 1, 0 ] ]) # true values of x x_true = np.array([10, 13, 5, 8, 40]) # data without noise y = func(x_true,a) #************************************ # Apriori x0 x0 = np.array([2, 3, 1, 4, 20]) fit_params = Parameters() fit_params.add('x', value=x0) out = minimize(residual, fit_params, args=(a, y)) print out if __name__ == '__main__': main() Result should be optimal x array. A: <code> import scipy.optimize import numpy as np np.random.seed(42) a = np.random.rand(3,5) x_true = np.array([10, 13, 5, 8, 40]) y = a.dot(x_true ** 2) x0 = np.array([2, 3, 1, 4, 20]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(out) </code> SOLUTION: def residual_ans(x, a, y): s = ((y - a.dot(x**2))**2).sum() return s out = scipy.optimize.minimize(residual_ans, x0=x0, args=(a, y), method= 'L-BFGS-B').x
INSTRUCTION: Problem: I'm trying to reduce noise in a binary python array by removing all completely isolated single cells, i.e. setting "1" value cells to 0 if they are completely surrounded by other "0"s like this: 0 0 0 0 1 0 0 0 0 I have been able to get a working solution by removing blobs with sizes equal to 1 using a loop, but this seems like a very inefficient solution for large arrays. In this case, eroding and dilating my array won't work as it will also remove features with a width of 1. I feel the solution lies somewhere within the scipy.ndimage package, but so far I haven't been able to crack it. Any help would be greatly appreciated! A: <code> import numpy as np import scipy.ndimage square = np.zeros((32, 32)) square[10:-10, 10:-10] = 1 np.random.seed(12) x, y = (32*np.random.random((2, 20))).astype(int) square[x, y] = 1 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(square) </code> SOLUTION: def filter_isolated_cells(array, struct): filtered_array = np.copy(array) id_regions, num_ids = scipy.ndimage.label(filtered_array, structure=struct) id_sizes = np.array(scipy.ndimage.sum(array, id_regions, range(num_ids + 1))) area_mask = (id_sizes == 1) filtered_array[area_mask[id_regions]] = 0 return filtered_array square = filter_isolated_cells(square, struct=np.ones((3,3)))
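A tiny before/after sketch of the label-and-size idea above, on a hypothetical 5x5 array: the lone pixel disappears while the 2x2 block is kept.
<code>
import numpy as np
import scipy.ndimage

a = np.zeros((5, 5), dtype=int)
a[1:3, 1:3] = 1          # a 2x2 feature
a[0, 4] = 1              # an isolated single cell

labels, n = scipy.ndimage.label(a, structure=np.ones((3, 3)))
sizes = scipy.ndimage.sum(a, labels, range(n + 1))
a[(sizes == 1)[labels]] = 0   # zero out regions of size 1
print(a)
</code>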
INSTRUCTION: Problem: I want to capture an integral of a column of my dataframe with a time index. This works fine for a grouping that happens every time interval. from scipy import integrate >>> df Time A 2017-12-18 19:54:40 -50187.0 2017-12-18 19:54:45 -60890.5 2017-12-18 19:54:50 -28258.5 2017-12-18 19:54:55 -8151.0 2017-12-18 19:55:00 -9108.5 2017-12-18 19:55:05 -12047.0 2017-12-18 19:55:10 -19418.0 2017-12-18 19:55:15 -50686.0 2017-12-18 19:55:20 -57159.0 2017-12-18 19:55:25 -42847.0 >>> integral_df = df.groupby(pd.Grouper(freq='25S')).apply(integrate.trapz) Time A 2017-12-18 19:54:35 -118318.00 2017-12-18 19:55:00 -115284.75 2017-12-18 19:55:25 0.00 Freq: 25S, Name: A, dtype: float64 EDIT: The scipy integral function automatically uses the time index to calculate it's result. This is not true. You have to explicitly pass the conversion to np datetime in order for scipy.integrate.trapz to properly integrate using time. See my comment on this question. But, i'd like to take a rolling integral instead. I've tried Using rolling functions found on SO, But the code was getting messy as I tried to workout my input to the integrate function, as these rolling functions don't return dataframes. How can I take a rolling integral over time over a function of one of my dataframe columns? A: <code> import pandas as pd import io from scipy import integrate string = ''' Time A 2017-12-18-19:54:40 -50187.0 2017-12-18-19:54:45 -60890.5 2017-12-18-19:54:50 -28258.5 2017-12-18-19:54:55 -8151.0 2017-12-18-19:55:00 -9108.5 2017-12-18-19:55:05 -12047.0 2017-12-18-19:55:10 -19418.0 2017-12-18-19:55:15 -50686.0 2017-12-18-19:55:20 -57159.0 2017-12-18-19:55:25 -42847.0 ''' df = pd.read_csv(io.StringIO(string), sep = '\s+') </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(integral_df) </code> SOLUTION: df.Time = pd.to_datetime(df.Time, format='%Y-%m-%d-%H:%M:%S') df = df.set_index('Time') integral_df = df.rolling('25S').apply(integrate.trapz)
INSTRUCTION: Problem: I have a sparse matrix in csr format (which makes sense for my purposes, as it has lots of rows but relatively few columns, ~8million x 90). My question is, what's the most efficient way to access particular values from the matrix given lists of row,column indices? I can quickly get a row using matrix.getrow(row), but this also returns 1-row sparse matrix, and accessing the value at a particular column seems clunky. The only reliable method I've found to get a particular matrix value, given the row and column, is: getting the row vector, converting to dense array, and fetching the element on column. But this seems overly verbose and complicated. and I don't want to change it to dense matrix to keep the efficiency. for example, I want to fetch elements at (2, 3) and (1, 0), so row = [2, 1], and column = [3, 0]. The result should be a list or 1-d array like: [matirx[2, 3], matrix[1, 0]] Is there a simpler/faster method I'm missing? A: <code> import numpy as np from scipy.sparse import csr_matrix arr = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) M = csr_matrix(arr) row = [2, 1] column = [3, 0] </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = np.array(M[row,column]).squeeze()
INSTRUCTION: Problem: Using scipy, is there an easy way to emulate the behaviour of MATLAB's dctmtx function which returns a NxN (ortho-mode normed) DCT matrix for some given N? There's scipy.fftpack.dctn but that only applies the DCT. Do I have to implement this from scratch if I don't want use another dependency besides scipy? A: <code> import numpy as np import scipy.fft as sf N = 8 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = sf.dct(np.eye(N), axis=0, norm= 'ortho')
INSTRUCTION: Problem: Basically, I am just trying to do a simple matrix multiplication, specifically, extract each column of it and normalize it by dividing it with its length. #csr sparse matrix self.__WeightMatrix__ = self.__WeightMatrix__.tocsr() #iterate through columns for Col in xrange(self.__WeightMatrix__.shape[1]): Column = self.__WeightMatrix__[:,Col].data List = [x**2 for x in Column] #get the column length Len = math.sqrt(sum(List)) #here I assumed dot(number,Column) would do a basic scalar product dot((1/Len),Column) #now what? how do I update the original column of the matrix, everything that have been returned are copies, which drove me nuts and missed pointers so much I've searched through the scipy sparse matrix documentations and got no useful information. I was hoping for a function to return a pointer/reference to the matrix so that I can directly modify its value. Thanks A: <code> from scipy import sparse import numpy as np import math sa = sparse.random(10, 10, density = 0.3, format = 'csr', random_state = 42) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(sa) </code> SOLUTION: sa = sparse.csr_matrix(sa.toarray() / np.sqrt(np.sum(sa.toarray()**2, axis=0)))
INSTRUCTION: Problem: I'd like to achieve a fourier series development for a x-y-dataset using numpy and scipy. At first I want to fit my data with the first 8 cosines and plot additionally only the first harmonic. So I wrote the following two function defintions: # fourier series defintions tau = 0.045 def fourier8(x, a1, a2, a3, a4, a5, a6, a7, a8): return a1 * np.cos(1 * np.pi / tau * x) + \ a2 * np.cos(2 * np.pi / tau * x) + \ a3 * np.cos(3 * np.pi / tau * x) + \ a4 * np.cos(4 * np.pi / tau * x) + \ a5 * np.cos(5 * np.pi / tau * x) + \ a6 * np.cos(6 * np.pi / tau * x) + \ a7 * np.cos(7 * np.pi / tau * x) + \ a8 * np.cos(8 * np.pi / tau * x) def fourier1(x, a1): return a1 * np.cos(1 * np.pi / tau * x) Then I use them to fit my data: # import and filename filename = 'data.txt' import numpy as np from scipy.optimize import curve_fit z, Ua = np.loadtxt(filename,delimiter=',', unpack=True) tau = 0.045 popt, pcov = curve_fit(fourier8, z, Ua) which works as desired But know I got stuck making it generic for arbitary orders of harmonics, e.g. I want to fit my data with the first fifteen harmonics. How could I achieve that without defining fourier1, fourier2, fourier3 ... , fourier15? By the way, initial guess of a1,a2,… should be set to default value. A: <code> from scipy.optimize import curve_fit import numpy as np s = '''1.000000000000000021e-03,2.794682735905079767e+02 4.000000000000000083e-03,2.757183469104809888e+02 1.400000000000000029e-02,2.791403179603880176e+02 2.099999999999999784e-02,1.781413355804160119e+02 3.300000000000000155e-02,-2.798375517344049968e+02 4.199999999999999567e-02,-2.770513900380149721e+02 5.100000000000000366e-02,-2.713769422793179729e+02 6.900000000000000577e-02,1.280740698304900036e+02 7.799999999999999989e-02,2.800801708984579932e+02 8.999999999999999667e-02,2.790400329037249776e+02'''.replace('\n', ';') arr = np.matrix(s) z = np.array(arr[:, 0]).squeeze() Ua = np.array(arr[:, 1]).squeeze() tau = 0.045 degree = 15 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(popt, pcov) </code> SOLUTION: def fourier(x, *a): ret = a[0] * np.cos(np.pi / tau * x) for deg in range(1, len(a)): ret += a[deg] * np.cos((deg+1) * np.pi / tau * x) return ret popt, pcov = curve_fit(fourier, z, Ua, [1.0] * degree)
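The key trick above is the variadic model: `curve_fit` infers the number of coefficients from the length of `p0`. A stripped-down sketch with made-up data and a unit period:
<code>
import numpy as np
from scipy.optimize import curve_fit

def model(x, *a):
    # a[0]*cos(1*x) + a[1]*cos(2*x) + ...
    return sum(coef * np.cos((i + 1) * x) for i, coef in enumerate(a))

x = np.linspace(0, 2 * np.pi, 200)
y = 3 * np.cos(x) + 0.5 * np.cos(2 * x)
popt, _ = curve_fit(model, x, y, p0=[1.0] * 4)  # len(p0) sets the harmonic count
print(np.round(popt, 3))                        # ~[3, 0.5, 0, 0]
</code>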
INSTRUCTION: Problem: I am working with a 2D numpy array made of 512x512=262144 values. Such values are of float type and range from 0.0 to 1.0. The array has an X,Y coordinate system which originates in the top left corner: thus, position (0,0) is in the top left corner, while position (512,512) is in the bottom right corner. This is how the 2D array looks like (just an excerpt): X,Y,Value 0,0,0.482 0,1,0.49 0,2,0.496 0,3,0.495 0,4,0.49 0,5,0.489 0,6,0.5 0,7,0.504 0,8,0.494 0,9,0.485 I would like to be able to: Find the regions of cells which value exceeds a given threshold, say 0.75; Note: If two elements touch horizontally, vertically or diagnoally, they belong to one region. Determine the distance between the center of mass of such regions and the top left corner, which has coordinates (0,0). Please output the distances as a list. A: <code> import numpy as np from scipy import ndimage np.random.seed(10) gen = np.random.RandomState(0) img = gen.poisson(2, size=(512, 512)) img = ndimage.gaussian_filter(img.astype(np.double), (30, 30)) img -= img.min() img /= img.max() threshold = 0.75 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: blobs = img > threshold labels, nlabels = ndimage.label(blobs) r, c = np.vstack(ndimage.center_of_mass(img, labels, np.arange(nlabels) + 1)).T # find their distances from the top-left corner d = np.sqrt(r * r + c * c) result = sorted(d)
INSTRUCTION: Problem: After clustering a distance matrix with scipy.cluster.hierarchy.linkage, and assigning each sample to a cluster using scipy.cluster.hierarchy.cut_tree, I would like to extract one element out of each cluster, which is the k-th closest to that cluster's centroid. • I would be the happiest if an off-the-shelf function existed for this, but in the lack thereof: • some suggestions were already proposed here for extracting the centroids themselves, but not the closest-to-centroid elements. • Note that this is not to be confused with the centroid linkage rule in scipy.cluster.hierarchy.linkage. I have already carried out the clustering itself, just want to access the closest-to-centroid elements. What I want is the index of the k-closest element in original data for each cluster, i.e., result[0] is the index of the k-th closest element to centroid of cluster 0. A: <code> import numpy as np import scipy.spatial centroids = np.random.rand(5, 3) data = np.random.rand(100, 3) k = 3 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: def find_k_closest(centroids, data, k=1, distance_norm=2): kdtree = scipy.spatial.cKDTree(data) distances, indices = kdtree.query(centroids, k, p=distance_norm) if k > 1: indices = indices[:,-1] values = data[indices] return indices, values result, _ = find_k_closest(centroids, data, k)
INSTRUCTION: Problem: I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential or logarithmic). I use Python and Numpy and for polynomial fitting there is a function polyfit(). How do I fit y = A + Blogx using polyfit()? The result should be an np.array of [A, B] A: <code> import numpy as np import scipy x = np.array([1, 7, 20, 50, 79]) y = np.array([10, 19, 30, 35, 51]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = np.polyfit(np.log(x), y, 1)[::-1]
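Why this works, shown with synthetic data: y = A + B*log(x) is linear in log(x), so a degree-1 `polyfit` on log(x) returns [B, A], and reversing it gives [A, B].
<code>
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.0 + 3.0 * np.log(x)            # exact A=2, B=3
B, A = np.polyfit(np.log(x), y, 1)   # highest power first
print([A, B])                        # ~[2.0, 3.0]
</code>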
INSTRUCTION: Problem: I have a table of measured values for a quantity that depends on two parameters. So say I have a function fuelConsumption(speed, temperature), for which data on a mesh are known. Now I want to interpolate the expected fuelConsumption for a lot of measured data points (speed, temperature) from a pandas.DataFrame (and return a vector with the values for each data point). I am currently using SciPy's interpolate.interp2d for cubic interpolation, but when passing the parameters as two vectors [s1,s2] and [t1,t2] (only two ordered values for simplicity) it will construct a mesh and return: [[f(s1,t1), f(s2,t1)], [f(s1,t2), f(s2,t2)]] The result I am hoping to get is: [f(s1,t1), f(s2, t2)] How can I interpolate to get the output I want? I want to use function interpolated on x, y, z to compute values on arrays s and t, and the result should be like mentioned above. A: <code> import numpy as np import scipy.interpolate s = np.linspace(-1, 1, 50) t = np.linspace(-2, 0, 50) x, y = np.ogrid[-1:1:10j,-2:0:10j] z = (x + y)*np.exp(-6.0 * (x * x + y * y)) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: spl = scipy.interpolate.RectBivariateSpline(x, y, z) result = spl(s, t, grid=False)
INSTRUCTION: Problem: I have two csr_matrix, c1 and c2. I want a new sparse matrix Feature = [c1, c2], that is, to stack c1 and c2 horizontally to get a new sparse matrix. To make use of sparse matrix's memory efficiency, I don't want results as dense arrays. But if I directly concatenate them this way, there's an error that says the matrix Feature is a list. And if I try this: Feature = csr_matrix(Feature) It gives the error: Traceback (most recent call last): File "yelpfilter.py", line 91, in <module> Feature = csr_matrix(Feature) File "c:\python27\lib\site-packages\scipy\sparse\compressed.py", line 66, in __init__ self._set_self( self.__class__(coo_matrix(arg1, dtype=dtype)) ) File "c:\python27\lib\site-packages\scipy\sparse\coo.py", line 185, in __init__ self.row, self.col = M.nonzero() TypeError: __nonzero__ should return bool or int, returned numpy.bool_ Any help would be appreciated! A: <code> from scipy import sparse c1 = sparse.csr_matrix([[0, 0, 1, 0], [2, 0, 0, 0], [0, 0, 0, 0]]) c2 = sparse.csr_matrix([[0, 3, 4, 0], [0, 0, 0, 5], [6, 7, 0, 8]]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> #print(Feature) </code> SOLUTION: Feature = sparse.hstack((c1, c2)).tocsr()
INSTRUCTION: Problem: I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential or logarithmic). I use Python and Numpy and for polynomial fitting there is a function polyfit(). But I found no such functions for exponential and logarithmic fitting. How do I fit y = A*exp(Bx) + C ? The result should be an np.array of [A, B, C]. I know that polyfit performs bad for this function, so I would like to use curve_fit to solve the problem, and it should start from initial guess p0. A: <code> import numpy as np import scipy.optimize y = np.array([1, 7, 20, 50, 79]) x = np.array([10, 19, 30, 35, 51]) p0 = (4, 0.1, 1) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = scipy.optimize.curve_fit(lambda t,a,b, c: a*np.exp(b*t) + c, x, y, p0=p0)[0]
INSTRUCTION: Problem: Given two sets of points in n-dimensional space, how can one map points from one set to the other, such that each point is only used once and the total euclidean distance between the pairs of points is minimized? For example, import matplotlib.pyplot as plt import numpy as np # create six points in 2d space; the first three belong to set "A" and the # second three belong to set "B" x = [1, 2, 3, 1.8, 1.9, 3.4] y = [2, 3, 1, 2.6, 3.4, 0.4] colors = ['red'] * 3 + ['blue'] * 3 plt.scatter(x, y, c=colors) plt.show() So in the example above, the goal would be to map each red point to a blue point such that each blue point is only used once and the sum of the distances between points is minimized. The application I have in mind involves a fairly small number of datapoints in 3-dimensional space, so the brute force approach might be fine, but I thought I would check to see if anyone knows of a more efficient or elegant solution first. The result should be an assignment of points in second set to corresponding elements in the first set. For example, a matching solution is Points1 <-> Points2 0 --- 2 1 --- 0 2 --- 1 and the result is [2, 0, 1] A: <code> import numpy as np import scipy.spatial import scipy.optimize points1 = np.array([(x, y) for x in np.linspace(-1,1,7) for y in np.linspace(-1,1,7)]) N = points1.shape[0] points2 = 2*np.random.rand(N,2)-1 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: C = scipy.spatial.distance.cdist(points1, points2) _, result = scipy.optimize.linear_sum_assignment(C)
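A three-point toy sketch of the same recipe: build the pairwise distance matrix with `cdist`, then `linear_sum_assignment` returns, for each point in the first set, the index of its partner in the second set.
<code>
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

points1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
points2 = np.array([[0.1, 1.0], [0.9, 0.1], [0.0, 0.1]])

C = cdist(points1, points2)            # cost matrix of Euclidean distances
row, col = linear_sum_assignment(C)    # minimises the total distance
print(col)                             # partner of points1[i] is points2[col[i]]
</code>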
INSTRUCTION: Problem: First off, I'm no mathmatician. I admit that. Yet I still need to understand how ScyPy's sparse matrices work arithmetically in order to switch from a dense NumPy matrix to a SciPy sparse matrix in an application I have to work on. The issue is memory usage. A large dense matrix will consume tons of memory. The formula portion at issue is where a matrix is added to some scalars. A = V + x B = A + y Where V is a square sparse matrix (its large, say 60,000 x 60,000). What I want is that x, y will only be added to non-zero values in V. With a SciPy, not all sparse matrices support the same features, like scalar addition. dok_matrix (Dictionary of Keys) supports scalar addition, but it looks like (in practice) that it's allocating each matrix entry, effectively rendering my sparse dok_matrix as a dense matrix with more overhead. (not good) The other matrix types (CSR, CSC, LIL) don't support scalar addition. I could try constructing a full matrix with the scalar value x, then adding that to V. I would have no problems with matrix types as they all seem to support matrix addition. However I would have to eat up a lot of memory to construct x as a matrix, and the result of the addition could end up being fully populated matrix as well. There must be an alternative way to do this that doesn't require allocating 100% of a sparse matrix. I’d like to solve the problem on coo matrix first. I'm will to accept that large amounts of memory are needed, but I thought I would seek some advice first. Thanks. A: <code> from scipy import sparse V = sparse.random(10, 10, density = 0.05, format = 'coo', random_state = 42) x = 100 y = 99 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(V) </code> SOLUTION: V = V.copy() V.data += x V.eliminate_zeros() V.data += y V.eliminate_zeros()
INSTRUCTION: Problem: Scipy offers many useful tools for root finding, notably fsolve. Typically a program has the following form: def eqn(x, a, b): return x + 2*a - b**2 fsolve(eqn, x0=0.5, args = (a,b)) and will find a root for eqn(x) = 0 given some arguments a and b. However, what if I have a problem where I want to solve for the b variable, giving the function arguments in a and b? Of course, I could recast the initial equation as def eqn(b, x, a) but this seems long winded and inefficient. Instead, is there a way I can simply set fsolve (or another root finding algorithm) to allow me to choose which variable I want to solve for? Note that the result should be an array of roots for many (x, a) pairs. The function might have two roots for each setting, and I want to put the smaller one first, like this: result = [[2, 5], [-3, 4]] for two (x, a) pairs A: <code> import numpy as np from scipy.optimize import fsolve def eqn(x, a, b): return x + 2*a - b**2 xdata = np.arange(4)+3 adata = np.random.randint(0, 10, (4,)) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: A = np.array([fsolve(lambda b,x,a: eqn(x, a, b), x0=0, args=(x,a))[0] for x, a in zip(xdata, adata)]) temp = -A result = np.zeros((len(A), 2)) result[:, 0] = A result[:, 1] = temp
INSTRUCTION: Problem: I would like to resample a numpy array as suggested here Resampling a numpy array representing an image however this resampling will do so by a factor i.e. x = np.arange(9).reshape(3,3) print scipy.ndimage.zoom(x, 2, order=1) Will create a shape of (6,6) but how can I resample an array to its best approximation within a (4,6),(6,8) or (6,10) shape for instance? A: <code> import numpy as np import scipy.ndimage x = np.arange(9).reshape(3, 3) shape = (6, 8) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = scipy.ndimage.zoom(x, zoom=(shape[0]/x.shape[0], shape[1]/x.shape[1]), order=1)
INSTRUCTION: Problem: First off, I'm no mathmatician. I admit that. Yet I still need to understand how ScyPy's sparse matrices work arithmetically in order to switch from a dense NumPy matrix to a SciPy sparse matrix in an application I have to work on. The issue is memory usage. A large dense matrix will consume tons of memory. The formula portion at issue is where a matrix is added to a scalar. A = V + x Where V is a square sparse matrix (its large, say 60,000 x 60,000). x is a float. What I want is that x will only be added to non-zero values in V. With a SciPy, not all sparse matrices support the same features, like scalar addition. dok_matrix (Dictionary of Keys) supports scalar addition, but it looks like (in practice) that it's allocating each matrix entry, effectively rendering my sparse dok_matrix as a dense matrix with more overhead. (not good) The other matrix types (CSR, CSC, LIL) don't support scalar addition. I could try constructing a full matrix with the scalar value x, then adding that to V. I would have no problems with matrix types as they all seem to support matrix addition. However I would have to eat up a lot of memory to construct x as a matrix, and the result of the addition could end up being fully populated matrix as well. There must be an alternative way to do this that doesn't require allocating 100% of a sparse matrix. I’d like to solve the problem on coo matrix first. I'm will to accept that large amounts of memory are needed, but I thought I would seek some advice first. Thanks. A: <code> from scipy import sparse V = sparse.random(10, 10, density = 0.05, format = 'coo', random_state = 42) x = 100 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(V) </code> SOLUTION: V.data += x
INSTRUCTION: Problem: I have problems using scipy.sparse.csr_matrix: for instance: a = csr_matrix([[1,2,3],[4,5,6]]) b = csr_matrix([[7,8,9],[10,11,12]]) how to merge them into [[1,2,3,7,8,9],[4,5,6,10,11,12]] I know a way is to transfer them into numpy array first: csr_matrix(numpy.hstack((a.toarray(),b.toarray()))) but it won't work when the matrix is huge and sparse, because the memory would run out. so are there any way to merge them together in csr_matrix? any answers are appreciated! A: <code> from scipy import sparse sa = sparse.random(10, 10, density = 0.01, format = 'csr') sb = sparse.random(10, 10, density = 0.01, format = 'csr') </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = sparse.hstack((sa, sb)).tocsr()
INSTRUCTION: Problem: I have two csr_matrix, c1, c2. I want a new matrix Feature = [c1, c2]. But if I directly concatenate them horizontally this way, there's an error that says the matrix Feature is a list. How can I achieve the matrix concatenation and still get the same type of matrix, i.e. a csr_matrix? And it doesn't work if I do this after the concatenation: Feature = csr_matrix(Feature) It gives the error: Traceback (most recent call last): File "yelpfilter.py", line 91, in <module> Feature = csr_matrix(Feature) File "c:\python27\lib\site-packages\scipy\sparse\compressed.py", line 66, in __init__ self._set_self( self.__class__(coo_matrix(arg1, dtype=dtype)) ) File "c:\python27\lib\site-packages\scipy\sparse\coo.py", line 185, in __init__ self.row, self.col = M.nonzero() TypeError: __nonzero__ should return bool or int, returned numpy.bool_ A: <code> from scipy import sparse c1 = sparse.csr_matrix([[0, 0, 1, 0], [2, 0, 0, 0], [0, 0, 0, 0]]) c2 = sparse.csr_matrix([[0, 3, 4, 0], [0, 0, 0, 5], [6, 7, 0, 8]]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> #print(Feature) </code> SOLUTION: Feature = sparse.hstack((c1, c2)).tocsr()
INSTRUCTION: Problem: I have an array which I want to interpolate over the 1st axes. At the moment I am doing it like this example: import numpy as np from scipy.interpolate import interp1d array = np.random.randint(0, 9, size=(100, 100, 100)) new_array = np.zeros((1000, 100, 100)) x = np.arange(0, 100, 1) x_new = np.arange(0, 100, 0.1) for i in x: for j in x: f = interp1d(x, array[:, i, j]) new_array[:, i, j] = f(xnew) The data I use represents 10 years of 5-day averaged values for each latitude and longitude in a domain. I want to create an array of daily values. I have also tried using splines. I don't really know how they work but it was not much faster. Is there a way to do this without using for loops? The result I want is an np.array of transformed x_new values using interpolated function. Thank you in advance for any suggestions. A: <code> import numpy as np import scipy.interpolate array = np.random.randint(0, 9, size=(10, 10, 10)) x = np.linspace(0, 10, 10) x_new = np.linspace(0, 10, 100) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(new_array) </code> SOLUTION: new_array = scipy.interpolate.interp1d(x, array, axis=0)(x_new)
INSTRUCTION: Problem: I'm trying to create a 2-dimensional array in Scipy/Numpy where each value represents the euclidean distance from the center. I'm very new to Scipy, and would like to know if there's a more elegant, idiomatic way of doing the same thing. I found the scipy.spatial.distance.cdist function, which seems promising, but I'm at a loss regarding how to fit it into this problem. def get_distance_2(y, x): mid = ... # needs to be a array of the shape (rows, cols, 2)? return scipy.spatial.distance.cdist(scipy.dstack((y, x)), mid) Just to clarify, what I'm looking for is something like this (for a 6 x 6 array). That is, to compute (Euclidean) distances from center point to every point in the image. [[ 3.53553391 2.91547595 2.54950976 2.54950976 2.91547595 3.53553391] [ 2.91547595 2.12132034 1.58113883 1.58113883 2.12132034 2.91547595] [ 2.54950976 1.58113883 0.70710678 0.70710678 1.58113883 2.54950976] [ 2.54950976 1.58113883 0.70710678 0.70710678 1.58113883 2.54950976] [ 2.91547595 2.12132034 1.58113883 1.58113883 2.12132034 2.91547595] [ 3.53553391 2.91547595 2.54950976 2.54950976 2.91547595 3.53553391]] A: <code> import numpy as np from scipy.spatial import distance shape = (6, 6) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: xs, ys = np.indices(shape) xs = xs.reshape(shape[0] * shape[1], 1) ys = ys.reshape(shape[0] * shape[1], 1) X = np.hstack((xs, ys)) mid_x, mid_y = (shape[0]-1)/2.0, (shape[1]-1)/2.0 result = distance.cdist(X, np.atleast_2d([mid_x, mid_y])).reshape(shape)
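An equivalent sketch without `cdist`, included only to show what the reshaping above amounts to: build the index grids and measure the distance to the geometric centre directly.
<code>
import numpy as np

shape = (6, 6)
ys, xs = np.indices(shape)
cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
print(np.round(dist, 8))   # matches the 6x6 matrix shown in the question
</code>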
INSTRUCTION: Problem: I have some data that comes in the form (x, y, z, V) where x,y,z are distances, and V is the moisture. I read a lot on StackOverflow about interpolation by python like this and this valuable posts, but all of them were about regular grids of x, y, z. i.e. every value of x contributes equally with every point of y, and every point of z. On the other hand, my points came from 3D finite element grid (as below), where the grid is not regular. The two mentioned posts 1 and 2, defined each of x, y, z as a separate numpy array then they used something like cartcoord = zip(x, y) then scipy.interpolate.LinearNDInterpolator(cartcoord, z) (in a 3D example). I can not do the same as my 3D grid is not regular, thus not each point has a contribution to other points, so if when I repeated these approaches I found many null values, and I got many errors. Here are 10 sample points in the form of [x, y, z, V] data = [[27.827, 18.530, -30.417, 0.205] , [24.002, 17.759, -24.782, 0.197] , [22.145, 13.687, -33.282, 0.204] , [17.627, 18.224, -25.197, 0.197] , [29.018, 18.841, -38.761, 0.212] , [24.834, 20.538, -33.012, 0.208] , [26.232, 22.327, -27.735, 0.204] , [23.017, 23.037, -29.230, 0.205] , [28.761, 21.565, -31.586, 0.211] , [26.263, 23.686, -32.766, 0.215]] I want to get the interpolated value V of the point (25, 20, -30) and (27, 20, -32) as a list. How can I get it? A: <code> import numpy as np import scipy.interpolate points = np.array([ [ 27.827, 18.53 , -30.417], [ 24.002, 17.759, -24.782], [ 22.145, 13.687, -33.282], [ 17.627, 18.224, -25.197], [ 29.018, 18.841, -38.761], [ 24.834, 20.538, -33.012], [ 26.232, 22.327, -27.735], [ 23.017, 23.037, -29.23 ], [ 28.761, 21.565, -31.586], [ 26.263, 23.686, -32.766]]) V = np.array([0.205, 0.197, 0.204, 0.197, 0.212, 0.208, 0.204, 0.205, 0.211, 0.215]) request = np.array([[25, 20, -30], [27, 20, -32]]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: result = scipy.interpolate.griddata(points, V, request).tolist()
INSTRUCTION: Problem: After clustering a distance matrix with scipy.cluster.hierarchy.linkage, and assigning each sample to a cluster using scipy.cluster.hierarchy.cut_tree, I would like to extract one element out of each cluster, which is the closest to that cluster's centroid. • I would be the happiest if an off-the-shelf function existed for this, but in the lack thereof: • some suggestions were already proposed here for extracting the centroids themselves, but not the closest-to-centroid elements. • Note that this is not to be confused with the centroid linkage rule in scipy.cluster.hierarchy.linkage. I have already carried out the clustering itself, just want to access the closest-to-centroid elements. What I want is the index of the closest element in original data for each cluster, i.e., result[0] is the index of the closest element to cluster 0. A: <code> import numpy as np import scipy.spatial centroids = np.random.rand(5, 3) data = np.random.rand(100, 3) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: def find_k_closest(centroids, data, k=1, distance_norm=2): kdtree = scipy.spatial.cKDTree(data) distances, indices = kdtree.query(centroids, k, p=distance_norm) if k > 1: indices = indices[:,-1] values = data[indices] return indices, values result, _ = find_k_closest(centroids, data)
INSTRUCTION: Problem: How to calculate kurtosis (according to Fisher’s definition) without bias correction? A: <code> import numpy as np import scipy.stats a = np.array([ 1. , 2. , 2.5, 400. , 6. , 0. ]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(kurtosis_result) </code> SOLUTION: kurtosis_result = scipy.stats.kurtosis(a)
INSTRUCTION: Problem: I am trying to optimise a function using the fminbound function of the scipy.optimize module. I want to set parameter bounds to keep the answer physically sensible (e.g. > 0). import scipy.optimize as sciopt import numpy as np The arrays: x = np.array([[ 1247.04, 1274.9 , 1277.81, 1259.51, 1246.06, 1230.2 , 1207.37, 1192. , 1180.84, 1182.76, 1194.76, 1222.65], [ 589. , 581.29, 576.1 , 570.28, 566.45, 575.99, 601.1 , 620.6 , 637.04, 631.68, 611.79, 599.19]]) y = np.array([ 1872.81, 1875.41, 1871.43, 1865.94, 1854.8 , 1839.2 , 1827.82, 1831.73, 1846.68, 1856.56, 1861.02, 1867.15]) I managed to optimise the linear function within the parameter bounds when I use only one parameter: fp = lambda p, x: x[0]+p*x[1] e = lambda p, x, y: ((fp(p,x)-y)**2).sum() pmin = 0.5 # mimimum bound pmax = 1.5 # maximum bound popt = sciopt.fminbound(e, pmin, pmax, args=(x,y)) This results in popt = 1.05501927245 However, when trying to optimise with multiple parameters, I get the following error message: fp = lambda p, x: p[0]*x[0]+p[1]*x[1] e = lambda p, x, y: ((fp(p,x)-y)**2).sum() pmin = np.array([0.5,0.5]) # mimimum bounds pmax = np.array([1.5,1.5]) # maximum bounds popt = sciopt.fminbound(e, pmin, pmax, args=(x,y)) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.7/dist-packages/scipy/optimize/optimize.py", line 949, in fminbound if x1 > x2: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I have tried to vectorize e (np.vectorize) but the error message remains the same. I understand that fminbound expects a float or array scalar as bounds. Is there another function that would work for this problem? The result should be solutions for p[0] and p[1] that minimize the objective function. A: <code> import numpy as np import scipy.optimize as sciopt x = np.array([[ 1247.04, 1274.9 , 1277.81, 1259.51, 1246.06, 1230.2 , 1207.37, 1192. , 1180.84, 1182.76, 1194.76, 1222.65], [ 589. , 581.29, 576.1 , 570.28, 566.45, 575.99, 601.1 , 620.6 , 637.04, 631.68, 611.79, 599.19]]) y = np.array([ 1872.81, 1875.41, 1871.43, 1865.94, 1854.8 , 1839.2 , 1827.82, 1831.73, 1846.68, 1856.56, 1861.02, 1867.15]) fp = lambda p, x: p[0]*x[0]+p[1]*x[1] e = lambda p, x, y: ((fp(p,x)-y)**2).sum() pmin = np.array([0.5,0.7]) # mimimum bounds pmax = np.array([1.5,1.8]) # maximum bounds </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(result) </code> SOLUTION: p_guess = (pmin + pmax)/2 bounds = np.c_[pmin, pmax] fp = lambda p, x: p[0]*x[0]+p[1]*x[1] e = lambda p, x, y: ((fp(p,x)-y)**2).sum() sol = sciopt.minimize(e, p_guess, bounds=bounds, args=(x,y)) result = sol.x
INSTRUCTION: Problem: This question and answer demonstrate that when feature selection is performed using one of scikit-learn's dedicated feature selection routines, then the names of the selected features can be retrieved as follows: np.asarray(vectorizer.get_feature_names())[featureSelector.get_support()] For example, in the above code, featureSelector might be an instance of sklearn.feature_selection.SelectKBest or sklearn.feature_selection.SelectPercentile, since these classes implement the get_support method which returns a boolean mask or integer indices of the selected features. When one performs feature selection via linear models penalized with the L1 norm, it's unclear how to accomplish this. sklearn.svm.LinearSVC has no get_support method and the documentation doesn't make clear how to retrieve the feature indices after using its transform method to eliminate features from a collection of samples. Am I missing something here? Note use penalty='l1' and keep default arguments for others unless necessary A: <code> import numpy as np import pandas as pd import sklearn from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.svm import LinearSVC corpus, y = load_data() assert type(corpus) == list assert type(y) == list vectorizer = TfidfVectorizer() X = vectorizer.fit_transform(corpus) def solve(corpus, y, vectorizer, X): </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> return selected_feature_names selected_feature_names = solve(corpus, y, vectorizer, X) print(selected_feature_names) </code> SOLUTION: # def solve(corpus, y, vectorizer, X): ### BEGIN SOLUTION svc = LinearSVC(penalty='l1', dual=False) svc.fit(X, y) selected_feature_names = np.asarray(vectorizer.get_feature_names_out())[np.flatnonzero(svc.coef_)] ### END SOLUTION # return selected_feature_names # selected_feature_names = solve(corpus, y, vectorizer, X)
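A self-contained sketch with a toy corpus (the strings and labels below are invented): after fitting `LinearSVC(penalty='l1', dual=False)`, the non-zero entries of `coef_` mark the surviving features, so `np.flatnonzero` plays the role of `get_support()`.
<code>
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

corpus = ["free offer now", "meeting tomorrow", "spam offer free", "project meeting notes"]
y = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)

svc = LinearSVC(penalty="l1", dual=False)
svc.fit(X, y)
selected = np.asarray(vectorizer.get_feature_names_out())[np.flatnonzero(svc.coef_)]
print(selected)
</code>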
INSTRUCTION: Problem: Hey all I am using sklearn.ensemble.IsolationForest, to predict outliers to my data. Is it possible to train (fit) the model once to my clean data, and then save it to use it for later? For example to save some attributes of the model, so the next time it isn't necessary to call again the fit function to train my model. For example, for GMM I would save the weights_, means_ and covs_ of each component, so for later I wouldn't need to train the model again. Just to make this clear, I am using this for online fraud detection, where this python script would be called many times for the same "category" of data, and I don't want to train the model EVERY time that I need to perform a predict, or test action. So is there a general solution? Thanks in advance. A: runnable code <code> import numpy as np import pandas as pd fitted_model = load_data() # Save the model in the file named "sklearn_model" </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION SOLUTION: import pickle with open('sklearn_model', 'wb') as f: pickle.dump(fitted_model, f)
INSTRUCTION: Problem: Is it possible to delete or insert a step in a sklearn.pipeline.Pipeline object? I am trying to do a grid search with or without one step in the Pipeline object. And wondering whether I can insert or delete a step in the pipeline. I saw in the Pipeline source code, there is a self.steps object holding all the steps. We can get the steps by named_steps(). Before modifying it, I want to make sure, I do not cause unexpected effects. Here is a example code: from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA clf = Pipeline([('AAA', PCA()), ('BBB', LinearSVC())]) clf Is it possible that we do something like steps = clf.named_steps(), then insert or delete in this list? Does this cause undesired effect on the clf object? A: Insert any step <code> import numpy as np import pandas as pd from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.preprocessing import PolynomialFeatures estimators = [('reduce_poly', PolynomialFeatures()), ('dim_svm', PCA()), ('sVm_233', SVC())] clf = Pipeline(estimators) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(len(clf.steps)) </code> SOLUTION: clf.steps.insert(0, ('reduce_dim', PCA()))
INSTRUCTION: Problem: How do I convert data from a Scikit-learn Bunch object (from sklearn.datasets) to a Pandas DataFrame? from sklearn.datasets import load_iris import pandas as pd data = load_iris() print(type(data)) data1 = pd. # Is there a Pandas method to accomplish this? A: <code> import numpy as np from sklearn.datasets import load_iris import pandas as pd data = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(data1) </code> SOLUTION: data1 = pd.DataFrame(data=np.c_[data['data'], data['target']], columns=data['feature_names'] + ['target'])
INSTRUCTION: Problem: Is there any package in Python that does data transformation like Box-Cox transformation to eliminate skewness of data? I know about sklearn, but I was unable to find functions to do Box-Cox transformation. How can I use sklearn to solve this? A: <code> import numpy as np import pandas as pd import sklearn data = load_data() assert type(data) == np.ndarray </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(box_cox_data) </code> SOLUTION: from sklearn import preprocessing pt = preprocessing.PowerTransformer(method="box-cox") box_cox_data = pt.fit_transform(data)
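A runnable sketch with synthetic positive data (Box-Cox requires strictly positive inputs); for data that can be zero or negative, `method='yeo-johnson'` is the usual fallback.
<code>
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 1))  # right-skewed, positive

pt = PowerTransformer(method="box-cox")
box_cox_data = pt.fit_transform(data)
print(pt.lambdas_)          # fitted lambda per column
</code>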
INSTRUCTION: Problem: I have a silly question. I have done Cross-validation in scikit learn and would like to make a more visual information with the values I got for each model. However, I can not access only the template name to insert into the dataframe. Always comes with the parameters together. Is there some method of objects created to access only the name of the model, without its parameters. Or will I have to create an external list with the names for it? I use: for model in models: scores = cross_val_score(model, X, y, cv=5) print(f'Name model: {model} , Mean score: {scores.mean()}') But I obtain the name with the parameters: Name model: LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False), Mean score: 0.8066782865537986 In fact I want to get the information this way: Name Model: LinearRegression, Mean Score: 0.8066782865537986 Thanks! A: <code> import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression model = LinearRegression() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(model_name) </code> SOLUTION: model_name = type(model).__name__
INSTRUCTION: Problem: I have a data which include dates in sorted order. I would like to split the given data to train and test set. However, I must to split the data in a way that the test have to be older than the train set. Please look at the given example: Let's assume that we have data by dates: 1, 2, 3, ..., n. The numbers from 1 to n represents the days. I would like to split it to 80% from the data to be train set and 20% of the data to be test set. Good results: 1) train set = 21, ..., 100 test set = 1, 2, 3, ..., 20 2) train set = 121, ... 200 test set = 101, 102, ... 120 My code: train_size = 0.8 train_dataframe, test_dataframe = cross_validation.train_test_split(features_dataframe, train_size=train_size) train_dataframe = train_dataframe.sort(["date"]) test_dataframe = test_dataframe.sort(["date"]) Does not work for me! Any suggestions? A: <code> import numpy as np import pandas as pd from sklearn.model_selection import train_test_split features_dataframe = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(train_dataframe) print(test_dataframe) </code> SOLUTION: n = features_dataframe.shape[0] train_size = 0.8 test_size = 1 - train_size + 0.005 train_dataframe = features_dataframe.iloc[int(n * test_size):] test_dataframe = features_dataframe.iloc[:int(n * test_size)]
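The idea above in a minimal sketch on a hypothetical 10-row frame already sorted by date: slice positionally so the oldest 20% of rows become the test set and the newest 80% the train set, with no shuffling.
<code>
import numpy as np
import pandas as pd

df = pd.DataFrame({"date": pd.date_range("2020-01-01", periods=10),
                   "value": np.arange(10)})

cut = int(len(df) * 0.2)                 # size of the (older) test block
test_df, train_df = df.iloc[:cut], df.iloc[cut:]
print(len(train_df), len(test_df))       # 8 2
</code>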
INSTRUCTION: Problem: Does scikit-learn provide facility to use SVM for regression, using a polynomial kernel (degree=2)? I looked at the APIs and I don't see any. Has anyone built a package on top of scikit-learn that does this? Note to use default arguments A: <code> import numpy as np import pandas as pd import sklearn X, y = load_data() assert type(X) == np.ndarray assert type(y) == np.ndarray # fit, then predict X </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(predict) </code> SOLUTION: from sklearn.svm import SVR svr_poly = SVR(kernel='poly', degree=2) svr_poly.fit(X, y) predict = svr_poly.predict(X)
INSTRUCTION: Problem: I am trying to run an Elastic Net regression but get the following error: NameError: name 'sklearn' is not defined... any help is greatly appreciated! # ElasticNet Regression from sklearn import linear_model import statsmodels.api as sm ElasticNet = sklearn.linear_model.ElasticNet() # create a lasso instance ElasticNet.fit(X_train, y_train) # fit data # print(lasso.coef_) # print (lasso.intercept_) # print out the coefficients print ("R^2 for training set:"), print (ElasticNet.score(X_train, y_train)) print ('-'*50) print ("R^2 for test set:"), print (ElasticNet.score(X_test, y_test)) A: corrected code <code> import numpy as np import pandas as pd from sklearn import linear_model import statsmodels.api as sm X_train, y_train, X_test, y_test = load_data() assert type(X_train) == np.ndarray assert type(y_train) == np.ndarray assert type(X_test) == np.ndarray assert type(y_test) == np.ndarray </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(training_set_score) print(test_set_score) </code> SOLUTION: ElasticNet = linear_model.ElasticNet() ElasticNet.fit(X_train, y_train) training_set_score = ElasticNet.score(X_train, y_train) test_set_score = ElasticNet.score(X_test, y_test)
INSTRUCTION: Problem: I have set up a GridSearchCV and have a set of parameters, with I will find the best combination of parameters. My GridSearch consists of 12 candidate models total. However, I am also interested in seeing the accuracy score of all of the 12, not just the best score, as I can clearly see by using the .best_score_ method. I am curious about opening up the black box that GridSearch sometimes feels like. I see a scoring= argument to GridSearch, but I can't see any way to print out scores. Actually, I want the full results of GridSearchCV besides getting the score, in pandas dataframe sorted by mean_fit_time. Any advice is appreciated. Thanks in advance. A: <code> import numpy as np import pandas as pd from sklearn.model_selection import GridSearchCV GridSearch_fitted = load_data() assert type(GridSearch_fitted) == sklearn.model_selection._search.GridSearchCV </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(full_results) </code> SOLUTION: full_results = pd.DataFrame(GridSearch_fitted.cv_results_).sort_values(by="mean_fit_time")
INSTRUCTION: Problem: I used a sklearn function to transform some data to scipy.sparse.csr.csr_matrix. But now I want to get a pandas DataFrame where I merge it back into my original df along with the other columns. I tried pd.concat, but I get an error called TypeError: cannot concatenate a non-NDFrame object What can I do? Thanks. A: <code> import pandas as pd import numpy as np from scipy.sparse import csr_matrix df_origin, transform_output = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(df) </code> SOLUTION: df = pd.concat([df_origin, pd.DataFrame(transform_output.toarray())], axis=1)
INSTRUCTION: Problem: Given the following example: from sklearn.feature_selection import SelectKBest from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline import pandas as pd pipe = Pipeline(steps=[ ('select', SelectKBest(k=2)), ('clf', LogisticRegression())] ) pipe.fit(data, target) I would like to get intermediate data state in scikit learn pipeline corresponding to 'select' output (after fit_transform on 'select' but not LogisticRegression). Or to say things in another way, it would be the same than to apply SelectKBest(k=2).fit_transform(data, target) Any ideas to do that? A: <code> import numpy as np from sklearn.feature_selection import SelectKBest from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline import pandas as pd data, target = load_data() pipe = Pipeline(steps=[ ('select', SelectKBest(k=2)), ('clf', LogisticRegression())] ) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(select_out) </code> SOLUTION: select_out = pipe.named_steps['select'].fit_transform(data, target)
INSTRUCTION: Problem: I need to perform hierarchical clustering(into 2 clusters) by a distance matrix describing their similarities, which is between different professors, like: prof1 prof2 prof3 prof1 0 0.8 0.9 prof2 0.8 0 0.2 prof3 0.9 0.2 0 data_matrix=[[0,0.8,0.9],[0.8,0,0.2],[0.9,0.2,0]] The expected number of clusters is 2. Can it be done using scipy.cluster.hierarchy? I tried to do that but failed. Anyone can give me some advice? prefer answer in a list like [label1, label2, ...] A: <code> import numpy as np import pandas as pd import scipy.cluster data_matrix = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(cluster_labels) </code> SOLUTION: Z = scipy.cluster.hierarchy.linkage(np.array(data_matrix), 'ward') cluster_labels = scipy.cluster.hierarchy.cut_tree(Z, n_clusters=2).reshape(-1, ).tolist()
INSTRUCTION: Problem: I was playing with the Titanic dataset on Kaggle (https://www.kaggle.com/c/titanic/data), and I want to use LabelEncoder from sklearn.preprocessing to transform Sex, originally labeled as 'male' into '1' and 'female' into '0'.. I had the following four lines of code, import pandas as pd from sklearn.preprocessing import LabelEncoder df = pd.read_csv('titanic.csv') df['Sex'] = LabelEncoder.fit_transform(df['Sex']) But when I ran it I received the following error message: TypeError: fit_transform() missing 1 required positional argument: 'y' the error comes from line 4, i.e., df['Sex'] = LabelEncoder.fit_transform(df['Sex']) I wonder what went wrong here. Although I know I could also do the transformation using map, which might be even simpler, but I still want to know what's wrong with my usage of LabelEncoder. A: Runnable code <code> import numpy as np import pandas as pd from sklearn.preprocessing import LabelEncoder df = load_data() def Transform(df): </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> return transformed_df transformed_df = Transform(df) print(transformed_df) </code> SOLUTION: # def Transform(df): ### BEGIN SOLUTION le = LabelEncoder() transformed_df = df.copy() transformed_df['Sex'] = le.fit_transform(df['Sex']) ### END SOLUTION # return transformed_df # transformed_df = Transform(df)
INSTRUCTION: Problem: Here is my code: count = CountVectorizer(lowercase = False) vocabulary = count.fit_transform([words]) print(count.get_feature_names()) For example if: words = "Hello @friend, this is a good day. #good." I want it to be separated into this: ['Hello', '@friend', 'this', 'is', 'a', 'good', 'day', '#good'] Currently, this is what it is separated into: ['Hello', 'friend', 'this', 'is', 'a', 'good', 'day'] A: runnable code <code> import numpy as np import pandas as pd from sklearn.feature_extraction.text import CountVectorizer words = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(feature_names) </code> SOLUTION: count = CountVectorizer(lowercase=False, token_pattern='[a-zA-Z0-9$&+:;=@#|<>^*()%-]+') vocabulary = count.fit_transform([words]) feature_names = count.get_feature_names_out()
INSTRUCTION: Problem: I would like to predict the probability from Logistic Regression model with cross-validation. I know you can get the cross-validation scores, but is it possible to return the values from predict_proba instead of the scores? please save the probabilities into a list or an array. A: <code> import numpy as np import pandas as pd from sklearn.linear_model import LogisticRegression from sklearn.model_selection import StratifiedKFold X, y = load_data() assert type(X) == np.ndarray assert type(y) == np.ndarray cv = StratifiedKFold(5).split(X, y) logreg = LogisticRegression() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(proba) </code> SOLUTION: from sklearn.model_selection import cross_val_predict proba = cross_val_predict(logreg, X, y, cv=cv, method='predict_proba')
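As an illustration with synthetic data (the real X, y come from load_data above), cross_val_predict with method='predict_proba' returns one row of out-of-fold class probabilities per sample:
<code>
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Synthetic stand-in for X, y.
X, y = make_classification(n_samples=120, random_state=0)
cv = StratifiedKFold(5).split(X, y)
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=cv, method='predict_proba')
print(proba.shape)  # (120, 2): class probabilities for every sample, predicted out of fold
</code>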
INSTRUCTION: Problem: I performed feature selection using ExtraTreesClassifier and SelectFromModel on a data set loaded as a DataFrame; however, I want to save the selected features as a list (a Python list) while keeping their column names as well. So is there a way to get the selected column names from the SelectFromModel method? Note that the output is a numpy array that returns the important features as whole columns, not the column headers. Please help me with the code below. import pandas as pd from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel import numpy as np df = pd.read_csv('los_10_one_encoder.csv') y = df['LOS'] # target X= df.drop('LOS',axis=1) # drop LOS column clf = ExtraTreesClassifier(random_state=42) clf = clf.fit(X, y) print(clf.feature_importances_) model = SelectFromModel(clf, prefit=True) X_new = model.transform(X) A: <code> import pandas as pd from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel import numpy as np X, y = load_data() clf = ExtraTreesClassifier(random_state=42) clf = clf.fit(X, y) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(column_names) </code> SOLUTION: model = SelectFromModel(clf, prefit=True) column_names = list(X.columns[model.get_support()])
INSTRUCTION: Problem: I would like to apply minmax scaler to column A2 and A3 in dataframe myData and add columns new_A2 and new_A3 for each month. myData = pd.DataFrame({ 'Month': [3, 3, 3, 3, 3, 3, 8, 8, 8, 8, 8, 8, 8], 'A1': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2], 'A2': [31, 13, 13, 13, 33, 33, 81, 38, 18, 38, 18, 18, 118], 'A3': [81, 38, 18, 38, 18, 18, 118, 31, 13, 13, 13, 33, 33], 'A4': [1, 1, 1, 1, 1, 1, 8, 8, 8, 8, 8, 8, 8], }) Below code is what I tried but got en error. from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() cols = myData.columns[2:4] myData['new_' + cols] = myData.groupby('Month')[cols].scaler.fit_transform(myData[cols]) How can I do this? Thank you. A: corrected, runnable code <code> import numpy as np from sklearn.preprocessing import MinMaxScaler import pandas as pd myData = pd.DataFrame({ 'Month': [3, 3, 3, 3, 3, 3, 8, 8, 8, 8, 8, 8, 8], 'A1': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2], 'A2': [31, 13, 13, 13, 33, 33, 81, 38, 18, 38, 18, 18, 118], 'A3': [81, 38, 18, 38, 18, 18, 118, 31, 13, 13, 13, 33, 33], 'A4': [1, 1, 1, 1, 1, 1, 8, 8, 8, 8, 8, 8, 8], }) scaler = MinMaxScaler() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(myData) </code> SOLUTION: cols = myData.columns[2:4] def scale(X): X_ = np.atleast_2d(X) return pd.DataFrame(scaler.fit_transform(X_), X.index) myData['new_' + cols] = myData.groupby('Month')[cols].apply(scale)
INSTRUCTION: Problem: I have a csv file without headers which I'm importing into python using pandas. The last column is the target class, while the rest of the columns are pixel values for images. How can I go ahead and split this dataset into a training set and a testing set (3 : 2)? Also, once that is done how would I also split each of those sets so that I can define x (all columns except the last one), and y (the last column)? I've imported my file using: dataset = pd.read_csv('example.csv', header=None, sep=',') Thanks A: use random_state=42 <code> import numpy as np import pandas as pd dataset = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(x_train) print(y_train) print(x_test) print(y_test) </code> SOLUTION: from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(dataset.iloc[:, :-1], dataset.iloc[:, -1], test_size=0.4, random_state=42)
INSTRUCTION: Problem: Is it possible to delete or insert a step in a sklearn.pipeline.Pipeline object? I am trying to do a grid search with or without one step in the Pipeline object. And wondering whether I can insert or delete a step in the pipeline. I saw in the Pipeline source code, there is a self.steps object holding all the steps. We can get the steps by named_steps(). Before modifying it, I want to make sure, I do not cause unexpected effects. Here is a example code: from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA clf = Pipeline([('AAA', PCA()), ('BBB', LinearSVC())]) clf Is it possible that we do something like steps = clf.named_steps(), then insert or delete in this list? Does this cause undesired effect on the clf object? A: Delete any step <code> import numpy as np import pandas as pd from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.preprocessing import PolynomialFeatures estimators = [('reduce_poly', PolynomialFeatures()), ('dim_svm', PCA()), ('sVm_233', SVC())] clf = Pipeline(estimators) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(len(clf.steps)) </code> SOLUTION: clf.steps.pop(-1)
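One general caveat worth keeping in mind here: clf.steps is a plain Python list, so pop/insert mutate the pipeline in place, and named_steps is rebuilt from it on access; after changing the steps of an already-fitted pipeline, it should be refitted before further use. A small sketch:
<code>
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures

clf = Pipeline([('reduce_poly', PolynomialFeatures()), ('dim_svm', PCA()), ('sVm_233', SVC())])
clf.steps.pop(-1)              # drop the final estimator in place
print(len(clf.steps))          # 2
print(list(clf.named_steps))   # ['reduce_poly', 'dim_svm'] -- named_steps tracks the change
</code>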
INSTRUCTION: Problem: I have encountered a problem: I want to get the intermediate result of a Pipeline instance in sklearn. In the example code below, I don't know how to get the intermediate data state of the tf_idf output, i.e. right after the fit_transform method of tf_idf, but before nmf. pipe = Pipeline([ ("tf_idf", TfidfVectorizer()), ("nmf", NMF()) ]) data = pd.DataFrame([["Salut comment tu vas", "Hey how are you today", "I am okay and you ?"]]).T data.columns = ["test"] pipe.fit_transform(data.test) Or, put another way, it would be the same as applying TfidfVectorizer().fit_transform(data.test) pipe.named_steps["tf_idf"] can get the tf_idf transformer, but I still can't get the data. Can anyone help me with that? A: <code> import numpy as np from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.decomposition import NMF from sklearn.pipeline import Pipeline import pandas as pd data = load_data() pipe = Pipeline([ ("tf_idf", TfidfVectorizer()), ("nmf", NMF()) ]) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(tf_idf_out) </code> SOLUTION: pipe.fit_transform(data.test) tf_idf_out = pipe.named_steps['tf_idf'].transform(data.test)
INSTRUCTION: Problem: When using SelectKBest or SelectPercentile in sklearn.feature_selection, it's known that we can use following code to get selected features np.asarray(vectorizer.get_feature_names())[featureSelector.get_support()] However, I'm not clear how to perform feature selection when using linear models like LinearSVC, since LinearSVC doesn't have a get_support method. I can't find any other methods either. Am I missing something here? Thanks Note use penalty='l1' and keep default arguments for others unless necessary A: <code> import numpy as np import pandas as pd import sklearn from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.svm import LinearSVC corpus, y = load_data() assert type(corpus) == list assert type(y) == list vectorizer = TfidfVectorizer() X = vectorizer.fit_transform(corpus) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(selected_feature_names) </code> SOLUTION: svc = LinearSVC(penalty='l1', dual=False) svc.fit(X, y) selected_feature_names = np.asarray(vectorizer.get_feature_names_out())[np.flatnonzero(svc.coef_)]
INSTRUCTION: Problem: I use linear SVM from scikit learn (LinearSVC) for binary classification problem. I understand that LinearSVC can give me the predicted labels, and the decision scores but I wanted probability estimates (confidence in the label). I want to continue using LinearSVC because of speed (as compared to sklearn.svm.SVC with linear kernel) Is it reasonable to use a logistic function to convert the decision scores to probabilities? import sklearn.svm as suppmach # Fit model: svmmodel=suppmach.LinearSVC(penalty='l1',C=1) predicted_test= svmmodel.predict(x_test) predicted_test_scores= svmmodel.decision_function(x_test) I want to check if it makes sense to obtain Probability estimates simply as [1 / (1 + exp(-x)) ] where x is the decision score. Alternately, are there other options wrt classifiers that I can use to do this efficiently? I think import CalibratedClassifierCV(cv=5) might solve this problem. So how to use this function to solve it? Thanks. use default arguments unless necessary A: <code> import numpy as np import pandas as pd import sklearn.svm as suppmach X, y, x_test = load_data() assert type(X) == np.ndarray assert type(y) == np.ndarray assert type(x_test) == np.ndarray # Fit model: svmmodel=suppmach.LinearSVC() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(proba) </code> SOLUTION: from sklearn.calibration import CalibratedClassifierCV calibrated_svc = CalibratedClassifierCV(svmmodel, cv=5, method='sigmoid') calibrated_svc.fit(X, y) proba = calibrated_svc.predict_proba(x_test)
INSTRUCTION: Problem: When trying to fit a Random Forest Regressor model with y data that looks like this: [ 0.00000000e+00 1.36094276e+02 4.46608221e+03 8.72660888e+03 1.31375786e+04 1.73580193e+04 2.29420671e+04 3.12216341e+04 4.11395711e+04 5.07972062e+04 6.14904935e+04 7.34275322e+04 7.87333933e+04 8.46302456e+04 9.71074959e+04 1.07146672e+05 1.17187952e+05 1.26953374e+05 1.37736003e+05 1.47239359e+05 1.53943242e+05 1.78806710e+05 1.92657725e+05 2.08912711e+05 2.22855152e+05 2.34532982e+05 2.41391255e+05 2.48699216e+05 2.62421197e+05 2.79544300e+05 2.95550971e+05 3.13524275e+05 3.23365158e+05 3.24069067e+05 3.24472999e+05 3.24804951e+05 And X data that looks like this: [ 735233.27082176 735234.27082176 735235.27082176 735236.27082176 735237.27082176 735238.27082176 735239.27082176 735240.27082176 735241.27082176 735242.27082176 735243.27082176 735244.27082176 735245.27082176 735246.27082176 735247.27082176 735248.27082176 With the following code: regressor = RandomForestRegressor(n_estimators=150, min_samples_split=1.0, random_state=42) rgr = regressor.fit(X,y) I get this error: ValueError: Number of labels=600 does not match number of samples=1 X data has only one feature and I assume one of my sets of values is in the wrong format but its not too clear to me from the documentation. A: <code> import numpy as np import pandas as pd from sklearn.ensemble import RandomForestRegressor X, y, X_test = load_data() assert type(X) == np.ndarray assert type(y) == np.ndarray assert type(X_test) == np.ndarray </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> predict = regressor.predict(X_test) print(predict) </code> SOLUTION: regressor = RandomForestRegressor(n_estimators=150, min_samples_split=1.0, random_state=42) regressor.fit(X.reshape(-1, 1), y)
INSTRUCTION: Problem: Given a list of variant length features, for example: f = [ ['t1'], ['t2', 't5', 't7'], ['t1', 't2', 't3', 't4', 't5'], ['t4', 't5', 't6'] ] where each sample has variant number of features and the feature dtype is str and already one hot. In order to use feature selection utilities of sklearn, I have to convert the features to a 2D-array which looks like: f t1 t2 t3 t4 t5 t6 t7 r1 0 1 1 1 1 1 1 r2 1 0 1 1 0 1 0 r3 0 0 0 0 0 1 1 r4 1 1 1 0 0 0 1 How could I achieve it via sklearn or numpy? A: <code> import pandas as pd import numpy as np import sklearn features = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(new_features) </code> SOLUTION: from sklearn.preprocessing import MultiLabelBinarizer new_features = MultiLabelBinarizer().fit_transform(features) rows, cols = new_features.shape for i in range(rows): for j in range(cols): if new_features[i, j] == 1: new_features[i, j] = 0 else: new_features[i, j] = 1
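As a quick check with the f list from the problem: MultiLabelBinarizer produces 1 where a tag is present, and the element-wise flipping loop in the solution is equivalent to subtracting that array from 1.
<code>
from sklearn.preprocessing import MultiLabelBinarizer

f = [['t1'], ['t2', 't5', 't7'], ['t1', 't2', 't3', 't4', 't5'], ['t4', 't5', 't6']]

mlb = MultiLabelBinarizer()
encoded = mlb.fit_transform(f)   # 1 where a tag is present in the row
inverted = 1 - encoded           # same result as the nested flipping loop
print(mlb.classes_)              # ['t1' 't2' 't3' 't4' 't5' 't6' 't7']
print(inverted)
</code>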
INSTRUCTION: Problem: Given a distance matrix, with similarity between various fruits : fruit1 fruit2 fruit3 fruit1 0 0.6 0.8 fruit2 0.6 0 0.111 fruit3 0.8 0.111 0 I need to perform hierarchical clustering on this data, where the above data is in the form of 2-d matrix simM=[[0,0.6,0.8],[0.6,0,0.111],[0.8,0.111,0]] The expected number of clusters is 2. I tried checking if I can implement it using sklearn.cluster AgglomerativeClustering but it is considering all the 3 rows as 3 separate vectors and not as a distance matrix. Can it be done using sklearn.cluster AgglomerativeClustering? prefer answer in a list like [label1, label2, ...] A: <code> import numpy as np import pandas as pd import sklearn.cluster simM = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(cluster_labels) </code> SOLUTION: model = sklearn.cluster.AgglomerativeClustering(affinity='precomputed', n_clusters=2, linkage='complete').fit(simM) cluster_labels = model.labels_
INSTRUCTION: Problem: I have a silly question. I have done Cross-validation in scikit learn and would like to make a more visual information with the values I got for each model. However, I can not access only the template name to insert into the dataframe. Always comes with the parameters together. Is there some method of objects created to access only the name of the model, without its parameters. Or will I have to create an external list with the names for it? I use: for model in models: scores = cross_val_score(model, X, y, cv=5) print(f'Name model: {model} , Mean score: {scores.mean()}') But I obtain the name with the parameters: Name model: model = LinearSVC(), Mean score: 0.8066782865537986 In fact I want to get the information this way: Name Model: LinearSVC, Mean Score: 0.8066782865537986 Thanks! A: <code> import numpy as np import pandas as pd from sklearn.svm import LinearSVC model = LinearSVC() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(model_name) </code> SOLUTION: model_name = type(model).__name__
INSTRUCTION: Problem: look at my code below: import pandas as pd from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel import numpy as np df = pd.read_csv('los_10_one_encoder.csv') y = df['LOS'] # target X= df.drop('LOS',axis=1) # drop LOS column clf = ExtraTreesClassifier(random_state=42) clf = clf.fit(X, y) print(clf.feature_importances_) model = SelectFromModel(clf, prefit=True) X_new = model.transform(X) I used ExtraTreesClassifier and SelectFromModel to do feature selection in the data set which is loaded as pandas df. However, I also want to keep the column names of the selected feature. My question is, is there a way to get the selected column names out from SelectFromModel method? Note that output type is numpy array, and returns important features in whole columns, not columns header. Great thanks if anyone could help me. A: <code> import pandas as pd from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel import numpy as np X, y = load_data() clf = ExtraTreesClassifier(random_state=42) clf = clf.fit(X, y) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(column_names) </code> SOLUTION: model = SelectFromModel(clf, prefit=True) column_names = X.columns[model.get_support()]
INSTRUCTION: Problem: I'd like to do some operations to my df. And there is an example below. df Col1 Col2 Col3 C 33 [Apple, Orange, Banana] A 2.5 [Apple, Grape] B 42 [Banana] after the operations, the df is converted into df Col1 Col2 Apple Orange Banana Grape C 33 1 1 1 0 A 2.5 1 0 0 1 B 42 0 0 1 0 Generally, I want this pandas column which consisting of a list of String names broken down into as many columns as the unique names. Maybe it's like one-hot-encode them (note that value 1 representing a given name existing in a row and then 0 is absence). Could any one give me any suggestion of pandas or sklearn methods? thanks! A: <code> import pandas as pd import numpy as np import sklearn df = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(df_out) </code> SOLUTION: from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer() df_out = df.join( pd.DataFrame( mlb.fit_transform(df.pop('Col3')), index=df.index, columns=mlb.classes_))
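A self-contained run of the same recipe on the toy frame from the problem: pop() removes the list column and feeds it to MultiLabelBinarizer, and join() attaches the resulting 0/1 columns back onto the remaining columns.
<code>
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({'Col1': ['C', 'A', 'B'],
                   'Col2': [33, 2.5, 42],
                   'Col3': [['Apple', 'Orange', 'Banana'], ['Apple', 'Grape'], ['Banana']]})

mlb = MultiLabelBinarizer()
df_out = df.join(pd.DataFrame(mlb.fit_transform(df.pop('Col3')),
                              index=df.index, columns=mlb.classes_))
print(df_out)  # Col1, Col2 plus one 0/1 column per unique fruit
</code>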
INSTRUCTION: Problem: My goal is to input 3 queries and find out which query is most similar to a set of 5 documents. So far I have calculated the tf-idf of the documents doing the following: from sklearn.feature_extraction.text import TfidfVectorizer def get_term_frequency_inverse_data_frequency(documents): vectorizer = TfidfVectorizer() matrix = vectorizer.fit_transform(documents) return matrix def get_tf_idf_query_similarity(documents, query): tfidf = get_term_frequency_inverse_data_frequency(documents) The problem I am having is now that I have tf-idf of the documents what operations do I perform on the query so I can find the cosine similarity to the documents? The answer should be like a 3*5 matrix of the similarities. A: <code> import numpy as np import pandas as pd from sklearn.feature_extraction.text import TfidfVectorizer queries, documents = load_data() assert type(queries) == list assert type(documents) == list tfidf = TfidfVectorizer() tfidf.fit_transform(documents) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(cosine_similarities_of_queries) </code> SOLUTION: from sklearn.metrics.pairwise import cosine_similarity cosine_similarities_of_queries = [] for query in queries: query_tfidf = tfidf.transform([query]) cosine_similarities_of_queries.append(cosine_similarity(query_tfidf, tfidf.transform(documents)).flatten())
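For illustration, a compact sketch with hypothetical documents and queries: transforming the queries with the already-fitted vectorizer and calling cosine_similarity once gives an (n_queries, n_documents) array in a single step.
<code>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = ["the cat sat", "dogs bark loudly", "cats and dogs", "a quiet cat", "loud dog park"]
queries = ["cat", "dog", "quiet park"]

tfidf = TfidfVectorizer()
doc_matrix = tfidf.fit_transform(documents)      # shape (5, n_terms)
query_matrix = tfidf.transform(queries)          # shape (3, n_terms)
similarities = cosine_similarity(query_matrix, doc_matrix)
print(similarities.shape)                        # (3, 5): one row of similarities per query
</code>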
INSTRUCTION: Problem: I need to perform hierarchical clustering by a distance matrix describing their similarities, which is between different professors, like: prof1 prof2 prof3 prof1 0 0.8 0.9 prof2 0.8 0 0.2 prof3 0.9 0.2 0 data_matrix=[[0,0.8,0.9],[0.8,0,0.2],[0.9,0.2,0]] The expected number of clusters is 2. Can it be done using sklearn.cluster.AgglomerativeClustering? I tried to do that but failed. Anyone can give me some advice? prefer answer in a list like [label1, label2, ...] A: <code> import numpy as np import pandas as pd import sklearn.cluster data_matrix = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(cluster_labels) </code> SOLUTION: model = sklearn.cluster.AgglomerativeClustering(affinity='precomputed', n_clusters=2, linkage='complete').fit(data_matrix) cluster_labels = model.labels_
INSTRUCTION: Problem: I am new to scikit-learn, but it did what I was hoping for. Now, maddeningly, the only remaining issue is that I don't find how I could print the model's coefficients it estimated. Especially when it comes to a pipeline fitted by a GridSearch. Now I have a pipeline including data scaling, centering, and a classifier model. What is the way to get its estimated coefficients? here is my current code pipe = Pipeline([ ("scale", StandardScaler()), ("model", SGDClassifier(random_state=42)) ]) grid = GridSearchCV(pipe, param_grid={"model__alpha": [1e-3, 1e-2, 1e-1, 1]}, cv=5) # where is the coef? Any advice is appreciated. Thanks in advance. A: runnable code <code> import numpy as np import pandas as pd from sklearn.linear_model import SGDClassifier from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler X, y = load_data() assert type(X) == np.ndarray assert type(y) == np.ndarray pipe = Pipeline([ ("scale", StandardScaler()), ("model", SGDClassifier(random_state=42)) ]) grid = GridSearchCV(pipe, param_grid={"model__alpha": [1e-3, 1e-2, 1e-1, 1]}, cv=5) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(coef) </code> SOLUTION: grid.fit(X, y) coef = grid.best_estimator_.named_steps['model'].coef_
INSTRUCTION: Problem: How can I pass a preprocessor to TfidfVectorizer? I made a function "preprocess" that takes a string and returns a preprocessed string, and then I set the preprocessor parameter to that function, "preprocessor=preprocess", but it doesn't work. I've searched many times, but I haven't found any example, as if no one uses it. The preprocessor looks like def preprocess(s): return s.upper() A: <code> import numpy as np import pandas as pd from sklearn.feature_extraction.text import TfidfVectorizer </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(tfidf.preprocessor) </code> SOLUTION: def preprocess(s): return s.upper() tfidf = TfidfVectorizer(preprocessor=preprocess)
INSTRUCTION: Problem: Is it possible to delete or insert a certain step in a sklearn.pipeline.Pipeline object? I am trying to do a grid search with or without one step in the Pipeline object. And wondering whether I can insert or delete a step in the pipeline. I saw in the Pipeline source code, there is a self.steps object holding all the steps. We can get the steps by named_steps(). Before modifying it, I want to make sure, I do not cause unexpected effects. Here is a example code: from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA estimators = [('reduce_dim', PCA()), ('svm', SVC())] clf = Pipeline(estimators) clf Is it possible that we do something like steps = clf.named_steps(), then insert or delete in this list? Does this cause undesired effect on the clf object? A: Delete the 2nd step <code> import numpy as np import pandas as pd from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.preprocessing import PolynomialFeatures estimators = [('reduce_dIm', PCA()), ('pOly', PolynomialFeatures()), ('svdm', SVC())] clf = Pipeline(estimators) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(clf.named_steps) </code> SOLUTION: clf.steps.pop(1)
INSTRUCTION: Problem: Here is my code: count = CountVectorizer(lowercase = False) vocabulary = count.fit_transform([words]) print(count.get_feature_names_out()) For example if: words = "ha @ji me te no ru bu ru wa, @na n te ko to wa na ka tsu ta wa. wa ta shi da ke no mo na ri za, mo u to kku ni " \ "#de a 't te ta ka ra" I want it to be separated into this: ['#de' '@ji' '@na' 'a' 'bu' 'da' 'ha' 'ka' 'ke' 'kku' 'ko' 'me' 'mo' 'n' 'na' 'ni' 'no' 'ra' 'ri' 'ru' 'shi' 't' 'ta' 'te' 'to' 'tsu' 'u' 'wa' 'za'] However, this is what it is separated into currently: ['bu' 'da' 'de' 'ha' 'ji' 'ka' 'ke' 'kku' 'ko' 'me' 'mo' 'na' 'ni' 'no' 'ra' 'ri' 'ru' 'shi' 'ta' 'te' 'to' 'tsu' 'wa' 'za'] A: runnable code <code> import numpy as np import pandas as pd from sklearn.feature_extraction.text import CountVectorizer words = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(feature_names) </code> SOLUTION: count = CountVectorizer(lowercase=False, token_pattern='[a-zA-Z0-9$&+:;=@#|<>^*()%-]+') vocabulary = count.fit_transform([words]) feature_names = count.get_feature_names_out()
INSTRUCTION: Problem: When trying to fit a Random Forest Regressor model with y data that looks like this: [ 0.00 1.36 4.46 8.72 1.31 1.73 2.29 3.12 4.11 5.07 6.14 7.34 7.87 8.46 9.71 1.07 1.17 1.26 1.37 1.47 1.53 1.78 1.92 2.08 2.22 2.34 2.41 2.48 2.62 2.79 2.95 3.13 3.23 3.24 3.24 3.24 And X data that looks like this: [ 233.176 234.270 235.270 523.176 237.176 238.270 239.270 524.176 241.176 242.270 243.270 524.176 245.176 246.270 247.270 524.176 With the following code: regressor = RandomForestRegressor(n_estimators=150, min_samples_split=1.0, random_state=42) rgr = regressor.fit(X,y) I get this error: ValueError: Number of labels=600 does not match number of samples=1 X data has only one feature and I assume one of my sets of values is in the wrong format but its not too clear to me from the documentation. A: <code> import numpy as np import pandas as pd from sklearn.ensemble import RandomForestRegressor X, y, X_test = load_data() assert type(X) == np.ndarray assert type(y) == np.ndarray assert type(X_test) == np.ndarray </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> predict = regressor.predict(X_test) print(predict) </code> SOLUTION: regressor = RandomForestRegressor(n_estimators=150, min_samples_split=1.0, random_state=42) regressor.fit(X.reshape(-1, 1), y)
INSTRUCTION: Problem: My goal is to input some queries and find out which query is most similar to a set of documents. So far I have calculated the tf-idf of the documents doing the following: from sklearn.feature_extraction.text import TfidfVectorizer def get_term_frequency_inverse_data_frequency(documents): vectorizer = TfidfVectorizer() matrix = vectorizer.fit_transform(documents) return matrix def get_tf_idf_query_similarity(documents, query): tfidf = get_term_frequency_inverse_data_frequency(documents) The problem I am having is now that I have tf-idf of the documents what operations do I perform on the query so I can find the cosine similarity to the documents? The answer should be like a 3*5 matrix of the similarities. A: <code> import numpy as np import pandas as pd from sklearn.feature_extraction.text import TfidfVectorizer queries, documents = load_data() assert type(queries) == list assert type(documents) == list tfidf = TfidfVectorizer() tfidf.fit_transform(documents) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(cosine_similarities_of_queries) </code> SOLUTION: from sklearn.metrics.pairwise import cosine_similarity cosine_similarities_of_queries = [] for query in queries: query_tfidf = tfidf.transform([query]) cosine_similarities_of_queries.append(cosine_similarity(query_tfidf, tfidf.transform(documents)).flatten())
INSTRUCTION: Problem: I would like to break down a pandas column, which is the last column, consisting of a list of elements into as many columns as there are unique elements i.e. one-hot-encode them (with value 1 representing a given element existing in a row and 0 in the case of absence). For example, taking dataframe df Col1 Col2 Col3 C 33 [Apple, Orange, Banana] A 2.5 [Apple, Grape] B 42 [Banana] I would like to convert this to: df Col1 Col2 Apple Orange Banana Grape C 33 1 1 1 0 A 2.5 1 0 0 1 B 42 0 0 1 0 Similarly, if the original df has four columns, then should do the operation to the 4th one. How can I use pandas/sklearn to achieve this? A: <code> import pandas as pd import numpy as np import sklearn df = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(df_out) </code> SOLUTION: from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer() df_out = df.join( pd.DataFrame( mlb.fit_transform(df.pop(df.columns[-1])), index=df.index, columns=mlb.classes_))
INSTRUCTION: Problem: I performed feature selection using ExtraTreesClassifier and SelectFromModel on a data set loaded as a DataFrame; however, I want to save the selected features while keeping their column names as well. So is there a way to get the selected column names from the SelectFromModel method? Note that the output is a numpy array that returns the important features as whole columns, not the column headers. Please help me with the code below. import pandas as pd from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel import numpy as np # read data, X is feature and y is target clf = ExtraTreesClassifier(random_state=42) clf = clf.fit(X, y) print(clf.feature_importances_) model = SelectFromModel(clf, prefit=True) X_new = model.transform(X) A: <code> import pandas as pd from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel import numpy as np X, y = load_data() clf = ExtraTreesClassifier(random_state=42) clf = clf.fit(X, y) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(column_names) </code> SOLUTION: model = SelectFromModel(clf, prefit=True) column_names = X.columns[model.get_support()]
INSTRUCTION: Problem: Given a distance matrix, with similarity between various fruits : fruit1 fruit2 fruit3 fruit1 0 0.6 0.8 fruit2 0.6 0 0.111 fruit3 0.8 0.111 0 I need to perform hierarchical clustering on this data (into 2 clusters), where the above data is in the form of 2-d matrix simM=[[0,0.6,0.8],[0.6,0,0.111],[0.8,0.111,0]] The expected number of clusters is 2. Can it be done using scipy.cluster.hierarchy? prefer answer in a list like [label1, label2, ...] A: <code> import numpy as np import pandas as pd import scipy.cluster simM = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(cluster_labels) </code> SOLUTION: Z = scipy.cluster.hierarchy.linkage(np.array(simM), 'ward') cluster_labels = scipy.cluster.hierarchy.cut_tree(Z, n_clusters=2).reshape(-1, ).tolist()
INSTRUCTION: Problem: I have data which includes dates in sorted order. I would like to split the given data into a train set and a test set. However, I must split the data in such a way that the test set is newer than the train set. Please look at the given example: Let's assume that we have data by dates: 1, 2, 3, ..., n. The numbers from 1 to n represent the days. I would like to split it so that 20% of the data is the train set and 80% of the data is the test set. Good results: 1) train set = 1, 2, 3, ..., 20 test set = 21, ..., 100 2) train set = 101, 102, ... 120 test set = 121, ... 200 My code: train_size = 0.2 train_dataframe, test_dataframe = cross_validation.train_test_split(features_dataframe, train_size=train_size) train_dataframe = train_dataframe.sort(["date"]) test_dataframe = test_dataframe.sort(["date"]) Does not work for me! Any suggestions? A: <code> import numpy as np import pandas as pd from sklearn.model_selection import train_test_split features_dataframe = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(train_dataframe) print(test_dataframe) </code> SOLUTION: n = features_dataframe.shape[0] train_size = 0.2 train_dataframe = features_dataframe.iloc[:int(n * train_size)] test_dataframe = features_dataframe.iloc[int(n * train_size):]
INSTRUCTION: Problem: I have used sklearn for Cross-validation and want to do a more visual information with the values of each model. The problem is, I can't only get the name of the templates. Instead, the parameters always come altogether. How can I only retrieve the name of the models without its parameters? Or does it mean that I have to create an external list for the names? here I have a piece of code: for model in models: scores = cross_val_score(model, X, y, cv=5) print(f'Name model: {model} , Mean score: {scores.mean()}') But I also obtain the parameters: Name model: LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False), Mean score: 0.8066782865537986 In fact I want to get the information this way: Name Model: LinearRegression, Mean Score: 0.8066782865537986 Any ideas to do that? Thanks! A: <code> import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression model = LinearRegression() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(model_name) </code> SOLUTION: model_name = type(model).__name__
INSTRUCTION: Problem: Is it possible to delete or insert a certain step in a sklearn.pipeline.Pipeline object? I am trying to do a grid search with or without one step in the Pipeline object. And wondering whether I can insert or delete a step in the pipeline. I saw in the Pipeline source code, there is a self.steps object holding all the steps. We can get the steps by named_steps(). Before modifying it, I want to make sure, I do not cause unexpected effects. Here is a example code: from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA estimators = [('reduce_dim', PCA()), ('svm', SVC())] clf = Pipeline(estimators) clf Is it possible that we do something like steps = clf.named_steps(), then insert or delete in this list? Does this cause undesired effect on the clf object? A: Insert ('t1919810', PCA()) right before 'svdm' <code> import numpy as np import pandas as pd from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.preprocessing import PolynomialFeatures estimators = [('reduce_dIm', PCA()), ('pOly', PolynomialFeatures()), ('svdm', SVC())] clf = Pipeline(estimators) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(clf.named_steps) </code> SOLUTION: clf.steps.insert(2, ('t1919810', PCA()))
INSTRUCTION: Problem: I was playing with the Titanic dataset on Kaggle (https://www.kaggle.com/c/titanic/data), and I want to use LabelEncoder from sklearn.preprocessing to transform Sex, originally labeled as 'male' into '1' and 'female' into '0'.. I had the following four lines of code, import pandas as pd from sklearn.preprocessing import LabelEncoder df = pd.read_csv('titanic.csv') df['Sex'] = LabelEncoder.fit_transform(df['Sex']) But when I ran it I received the following error message: TypeError: fit_transform() missing 1 required positional argument: 'y' the error comes from line 4, i.e., df['Sex'] = LabelEncoder.fit_transform(df['Sex']) I wonder what went wrong here. Although I know I could also do the transformation using map, which might be even simpler, but I still want to know what's wrong with my usage of LabelEncoder. A: Runnable code <code> import numpy as np import pandas as pd from sklearn.preprocessing import LabelEncoder df = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(transformed_df) </code> SOLUTION: le = LabelEncoder() transformed_df = df.copy() transformed_df['Sex'] = le.fit_transform(df['Sex'])
INSTRUCTION: Problem: Right now, I have my data in a 3 by 3 numpy array. If I was to use MinMaxScaler fit_transform on the array, it will normalize it column by column, whereas I wish to normalize the entire np array all together. Is there anyway to do that? A: <code> import numpy as np import pandas as pd from sklearn.preprocessing import MinMaxScaler np_array = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(transformed) </code> SOLUTION: scaler = MinMaxScaler() X_one_column = np_array.reshape([-1, 1]) result_one_column = scaler.fit_transform(X_one_column) transformed = result_one_column.reshape(np_array.shape)
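The trick above in runnable form on a small 3x3 array: reshaping to a single column makes MinMaxScaler use the global minimum and maximum, and reshaping back restores the original layout.
<code>
import numpy as np
from sklearn.preprocessing import MinMaxScaler

np_array = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [7.0, 8.0, 9.0]])

scaler = MinMaxScaler()
transformed = scaler.fit_transform(np_array.reshape(-1, 1)).reshape(np_array.shape)
print(transformed)  # 0.0 at the global minimum, 1.0 at the global maximum
</code>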
INSTRUCTION: Problem: I have some data structured as below, trying to predict t from the features. train_df t: time to predict f1: feature1 f2: feature2 f3:...... Can t be scaled with StandardScaler, so I instead predict t' and then inverse the StandardScaler to get back the real time? For example: from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(train_df['t']) train_df['t']= scaler.transform(train_df['t']) run regression model, check score, !! check predicted t' with real time value(inverse StandardScaler) <- possible? A: <code> import numpy as np import pandas as pd from sklearn.preprocessing import StandardScaler data = load_data() scaler = StandardScaler() scaler.fit(data) scaled = scaler.transform(data) def solve(data, scaler, scaled): </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> return inversed inversed = solve(data, scaler, scaled) print(inversed) </code> SOLUTION: # def solve(data, scaler, scaled): ### BEGIN SOLUTION inversed = scaler.inverse_transform(scaled) ### END SOLUTION # return inversed # inversed = solve(data, scaler, scaled)
INSTRUCTION: Problem: I would like to break down a pandas column, which is the last column, consisting of a list of elements into as many columns as there are unique elements i.e. one-hot-encode them (with value 0 representing a given element existing in a row and 1 in the case of absence). For example, taking dataframe df Col1 Col2 Col3 C 33 [Apple, Orange, Banana] A 2.5 [Apple, Grape] B 42 [Banana] I would like to convert this to: df Col1 Col2 Apple Orange Banana Grape C 33 0 0 0 1 A 2.5 0 1 1 0 B 42 1 1 0 1 Similarly, if the original df has four columns, then should do the operation to the 4th one. Could any one give me any suggestion of pandas or sklearn methods? thanks! A: <code> import pandas as pd import numpy as np import sklearn df = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(df_out) </code> SOLUTION: from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer() df_out = df.join( pd.DataFrame( mlb.fit_transform(df.pop(df.columns[-1])), index=df.index, columns=mlb.classes_)) for idx in df_out.index: for col in mlb.classes_: df_out.loc[idx, col] = 1 - df_out.loc[idx, col]
INSTRUCTION: Problem: I am trying to vectorize some data using sklearn.feature_extraction.text.CountVectorizer. This is the data that I am trying to vectorize: corpus = [ 'We are looking for Java developer', 'Frontend developer with knowledge in SQL and Jscript', 'And this is the third one.', 'Is this the first document?', ] Properties of the vectorizer are defined by the code below: vectorizer = CountVectorizer(stop_words="english",binary=True,lowercase=False,vocabulary={'Jscript','.Net','TypeScript','SQL', 'NodeJS','Angular','Mongo','CSS','Python','PHP','Photoshop','Oracle','Linux','C++',"Java",'TeamCity','Frontend','Backend','Full stack', 'UI Design', 'Web','Integration','Database design','UX'}) After I run: X = vectorizer.fit_transform(corpus) print(vectorizer.get_feature_names()) print(X.toarray()) I get desired results but keywords from vocabulary are ordered alphabetically. The output looks like this: ['.Net', 'Angular', 'Backend', 'C++', 'CSS', 'Database design', 'Frontend', 'Full stack', 'Integration', 'Java', 'Jscript', 'Linux', 'Mongo', 'NodeJS', 'Oracle', 'PHP', 'Photoshop', 'Python', 'SQL', 'TeamCity', 'TypeScript', 'UI Design', 'UX', 'Web'] [ [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] ] As you can see, the vocabulary is not in the same order as I set it above. Is there a way to change this? Thanks A: <code> import numpy as np import pandas as pd from sklearn.feature_extraction.text import CountVectorizer corpus = [ 'We are looking for Java developer', 'Frontend developer with knowledge in SQL and Jscript', 'And this is the third one.', 'Is this the first document?', ] </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(feature_names) print(X) </code> SOLUTION: vectorizer = CountVectorizer(stop_words="english", binary=True, lowercase=False, vocabulary=['Jscript', '.Net', 'TypeScript', 'SQL', 'NodeJS', 'Angular', 'Mongo', 'CSS', 'Python', 'PHP', 'Photoshop', 'Oracle', 'Linux', 'C++', "Java", 'TeamCity', 'Frontend', 'Backend', 'Full stack', 'UI Design', 'Web', 'Integration', 'Database design', 'UX']) X = vectorizer.fit_transform(corpus).toarray() feature_names = vectorizer.get_feature_names_out()
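The key detail in the solution is that the vocabulary is passed as a list rather than a set: a list preserves the given column order, whereas the set used in the question ends up sorted alphabetically. A reduced sketch with a shorter, hypothetical vocabulary:
<code>
from sklearn.feature_extraction.text import CountVectorizer

vocab_list = ['Jscript', '.Net', 'TypeScript', 'SQL', 'Java', 'Frontend']
corpus = ['We are looking for Java developer',
          'Frontend developer with knowledge in SQL and Jscript']

vectorizer = CountVectorizer(binary=True, lowercase=False, vocabulary=vocab_list)
X = vectorizer.fit_transform(corpus).toarray()
print(vectorizer.get_feature_names_out())  # columns follow vocab_list, not alphabetical order
print(X)
</code>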
INSTRUCTION: Problem: I am attempting to train models with GradientBoostingClassifier using categorical variables. The following is a primitive code sample, just for trying to input categorical variables into GradientBoostingClassifier. from sklearn import datasets from sklearn.ensemble import GradientBoostingClassifier import pandas iris = datasets.load_iris() # Use only data for 2 classes. X = iris.data[(iris.target==0) | (iris.target==1)] Y = iris.target[(iris.target==0) | (iris.target==1)] # Class 0 has indices 0-49. Class 1 has indices 50-99. # Divide data into 80% training, 20% testing. train_indices = list(range(40)) + list(range(50,90)) test_indices = list(range(40,50)) + list(range(90,100)) X_train = X[train_indices] X_test = X[test_indices] y_train = Y[train_indices] y_test = Y[test_indices] X_train = pandas.DataFrame(X_train) # Insert fake categorical variable. # Just for testing in GradientBoostingClassifier. X_train[0] = ['a']*40 + ['b']*40 # Model. clf = GradientBoostingClassifier(learning_rate=0.01,max_depth=8,n_estimators=50).fit(X_train, y_train) The following error appears: ValueError: could not convert string to float: 'b' From what I gather, it seems that One Hot Encoding on categorical variables is required before GradientBoostingClassifier can build the model. Can GradientBoostingClassifier build models using categorical variables without having to do one hot encoding? I want to convert categorical variable to matrix and merge back with original training data use get_dummies in pandas. R gbm package is capable of handling the sample data above. I'm looking for a Python library with equivalent capability and get_dummies seems good. A: <code> import numpy as np import pandas as pd from sklearn import datasets from sklearn.ensemble import GradientBoostingClassifier import pandas # load data in the example X_train, y_train = load_data() X_train[0] = ['a'] * 40 + ['b'] * 40 </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> clf = GradientBoostingClassifier(learning_rate=0.01, max_depth=8, n_estimators=50).fit(X_train, y_train) </code> SOLUTION: catVar = pd.get_dummies(X_train[0]).to_numpy() X_train = np.concatenate((X_train.iloc[:, 1:], catVar), axis=1)
INSTRUCTION: Problem: I have used the sklearn.preprocessing.OneHotEncoder to transform some data the output is scipy.sparse.csr.csr_matrix how can I merge it back into my original dataframe along with the other columns? I tried to use pd.concat but I get TypeError: cannot concatenate a non-NDFrame object Thanks A: <code> import pandas as pd import numpy as np from scipy.sparse import csr_matrix df_origin, transform_output = load_data() def solve(df, transform_output): </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> return result df = solve(df_origin, transform_output) print(df) </code> SOLUTION: # def solve(df, transform_output): ### BEGIN SOLUTION result = pd.concat([df, pd.DataFrame(transform_output.toarray())], axis=1) ### END SOLUTION # return result # df = solve(df_origin, transform_output)
INSTRUCTION: Problem: I have a csv file which looks like below date mse 2018-02-11 14.34 2018-02-12 7.24 2018-02-13 4.5 2018-02-14 3.5 2018-02-16 12.67 2018-02-21 45.66 2018-02-22 15.33 2018-02-24 98.44 2018-02-26 23.55 2018-02-27 45.12 2018-02-28 78.44 2018-03-01 34.11 2018-03-05 23.33 2018-03-06 7.45 ... ... Now I want to get two clusters for the mse values so that I know what values lies to which cluster and their mean. Now since I do not have any other set of values apart from mse (I have to provide X and Y), I would like to use just mse values to get a k means cluster.For now for the other set of values, I pass it as range which is of same size as no of mse values.This is what I did from sklearn.cluster import KMeans import numpy as np import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D df = pd.read_csv("generate_csv/all_data_device.csv", parse_dates=["date"]) f1 = df['mse'].values # generate another list f2 = list(range(0, len(f1))) X = np.array(list(zip(f1, f2))) kmeans = KMeans(n_clusters=2).fit(X) labels = kmeans.predict(X) # Centroid values centroids = kmeans.cluster_centers_ #print(centroids) fig = plt.figure() ax = Axes3D(fig) ax.scatter(X[:, 0], X[:, 1], c=labels) ax.scatter(centroids[:, 0], centroids[:, 1], marker='*', c='#050505', s=1000) plt.title('K Mean Classification') plt.show() How can I just use the mse values to get the k means cluster? I am aware of the function 'reshape()' but not quite sure how to use it? A: <code> from sklearn.cluster import KMeans df = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(labels) </code> SOLUTION: kmeans = KMeans(n_clusters=2) labels = kmeans.fit_predict(df[['mse']])
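A standalone version with a few of the mse values from the question: selecting the column with a double bracket, df[['mse']], keeps a 2-D (n_samples, 1) shape, so no manual reshape is needed.
<code>
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({'mse': [14.34, 7.24, 4.5, 3.5, 12.67, 45.66, 15.33, 98.44, 23.55, 45.12]})

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(df[['mse']])   # 2-D input: one feature, many samples
print(labels)
print(kmeans.cluster_centers_)             # mean mse of each cluster
</code>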
INSTRUCTION: Problem: I have some data structured as below, trying to predict t from the features. train_df t: time to predict f1: feature1 f2: feature2 f3:...... Can t be scaled with StandardScaler, so I instead predict t' and then inverse the StandardScaler to get back the real time? For example: from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(train_df['t']) train_df['t']= scaler.transform(train_df['t']) run regression model, check score, !! check predicted t' with real time value(inverse StandardScaler) <- possible? A: <code> import numpy as np import pandas as pd from sklearn.preprocessing import StandardScaler data = load_data() scaler = StandardScaler() scaler.fit(data) scaled = scaler.transform(data) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(inversed) </code> SOLUTION: inversed = scaler.inverse_transform(scaled)
INSTRUCTION: Problem: How do I convert data from a Scikit-learn Bunch object (from sklearn.datasets) to a Pandas DataFrame? from sklearn.datasets import load_iris import pandas as pd data = load_iris() print(type(data)) data1 = pd. # Is there a Pandas method to accomplish this? A: <code> import numpy as np from sklearn.datasets import load_iris import pandas as pd data = load_data() def solve(data): </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> return result data1 = solve(data) print(data1) </code> SOLUTION: # def solve(data): ### BEGIN SOLUTION result = pd.DataFrame(data=np.c_[data['data'], data['target']], columns=data['feature_names'] + ['target']) ### END SOLUTION # return result # data1 = solve(data)
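The same conversion run directly on load_iris, for reference:
<code>
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

data = load_iris()
df = pd.DataFrame(data=np.c_[data['data'], data['target']],
                  columns=data['feature_names'] + ['target'])
print(df.head())
</code>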
INSTRUCTION: Problem: Is it possible to delete or insert a step in a sklearn.pipeline.Pipeline object? I am trying to do a grid search with or without one step in the Pipeline object. And wondering whether I can insert or delete a step in the pipeline. I saw in the Pipeline source code, there is a self.steps object holding all the steps. We can get the steps by named_steps(). Before modifying it, I want to make sure, I do not cause unexpected effects. Here is a example code: from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA estimators = [('reduce_dim', PCA()), ('svm', SVC())] clf = Pipeline(estimators) clf Is it possible that we do something like steps = clf.named_steps(), then insert or delete in this list? Does this cause undesired effect on the clf object? A: Insert any step <code> import numpy as np import pandas as pd from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.preprocessing import PolynomialFeatures estimators = [('reduce_dim', PCA()), ('poly', PolynomialFeatures()), ('svm', SVC())] clf = Pipeline(estimators) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(len(clf.steps)) </code> SOLUTION: clf.steps.insert(0, ('reduce_dim', PCA()))
INSTRUCTION: Problem: I have a pandas DataFrame data it has about 12k rows and more than 500 columns, each column has its unique name However, when I used sklearn preprocessing, I found the result lose the information about the columns Here's the code from sklearn import preprocessing preprocessing.scale(data) outputs a numpy array. So my question is, how to apply preprocessing.scale to DataFrames, and don't lose the information(index, columns)? A: <code> import numpy as np import pandas as pd from sklearn import preprocessing data = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(df_out) </code> SOLUTION: df_out = pd.DataFrame(preprocessing.scale(data), index=data.index, columns=data.columns)
INSTRUCTION: Problem: I am trying to vectorize some data using sklearn.feature_extraction.text.CountVectorizer. This is the data that I am trying to vectorize: corpus = [ 'We are looking for Java developer', 'Frontend developer with knowledge in SQL and Jscript', 'And this is the third one.', 'Is this the first document?', ] Properties of the vectorizer are defined by the code below: vectorizer = CountVectorizer(stop_words="english",binary=True,lowercase=False,vocabulary={'Jscript','.Net','TypeScript','NodeJS','Angular','Mongo','CSS','Python','PHP','Photoshop','Oracle','Linux','C++',"Java",'TeamCity','Frontend','Backend','Full stack', 'UI Design', 'Web','Integration','Database design','UX'}) After I run: X = vectorizer.fit_transform(corpus) print(vectorizer.get_feature_names()) print(X.toarray()) I get desired results but keywords from vocabulary are ordered alphabetically. The output looks like this: ['.Net', 'Angular', 'Backend', 'C++', 'CSS', 'Database design', 'Frontend', 'Full stack', 'Integration', 'Java', 'Jscript', 'Linux', 'Mongo', 'NodeJS', 'Oracle', 'PHP', 'Photoshop', 'Python', 'TeamCity', 'TypeScript', 'UI Design', 'UX', 'Web'] [ [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] ] As you can see, the vocabulary is not in the same order as I set it above. Is there a way to change this? And actually, I want my result X be like following instead, if the order of vocabulary is correct, so there should be one more step [ [1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1] [1 1 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1] [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] ] (note this is incorrect but for result explanation) Thanks A: <code> import numpy as np import pandas as pd from sklearn.feature_extraction.text import CountVectorizer corpus = [ 'We are looking for Java developer', 'Frontend developer with knowledge in SQL and Jscript', 'And this is the third one.', 'Is this the first document?', ] </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(feature_names) print(X) </code> SOLUTION: vectorizer = CountVectorizer(stop_words="english", binary=True, lowercase=False, vocabulary=['Jscript', '.Net', 'TypeScript', 'NodeJS', 'Angular', 'Mongo', 'CSS', 'Python', 'PHP', 'Photoshop', 'Oracle', 'Linux', 'C++', "Java", 'TeamCity', 'Frontend', 'Backend', 'Full stack', 'UI Design', 'Web', 'Integration', 'Database design', 'UX']) X = vectorizer.fit_transform(corpus).toarray() X = 1 - X feature_names = vectorizer.get_feature_names_out()
INSTRUCTION: Problem: I would like to break down a pandas column, which is the last column, consisting of a list of elements into as many columns as there are unique elements i.e. one-hot-encode them (with value 1 representing a given element existing in a row and 0 in the case of absence). For example, taking dataframe df Col1 Col2 Col3 Col4 C 33 11 [Apple, Orange, Banana] A 2.5 4.5 [Apple, Grape] B 42 14 [Banana] D 666 1919810 [Suica, Orange] I would like to convert this to: df Col1 Col2 Col3 Apple Banana Grape Orange Suica C 33 11 1 1 0 1 0 A 2.5 4.5 1 0 1 0 0 B 42 14 0 1 0 0 0 D 666 1919810 0 0 0 1 1 How can I use pandas/sklearn to achieve this? A: <code> import pandas as pd import numpy as np import sklearn df = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(df_out) </code> SOLUTION: from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer() df_out = df.join( pd.DataFrame( mlb.fit_transform(df.pop('Col4')), index=df.index, columns=mlb.classes_))
INSTRUCTION: Problem: Is there any package in Python that does data transformation like Yeo-Johnson transformation to eliminate skewness of data? In R this could be done using caret package: set.seed(1) predictors = data.frame(x1 = rnorm(1000, mean = 5, sd = 2), x2 = rexp(1000, rate=10)) require(caret) trans = preProcess(predictors, c("BoxCox", "center", "scale")) predictorsTrans = data.frame( trans = predict(trans, predictors)) I know about sklearn, but I was unable to find functions to do Yeo-Johnson transformation. How can I use sklearn to solve this? A: <code> import numpy as np import pandas as pd import sklearn data = load_data() assert type(data) == np.ndarray </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(yeo_johnson_data) </code> SOLUTION: from sklearn import preprocessing pt = preprocessing.PowerTransformer(method="yeo-johnson") yeo_johnson_data = pt.fit_transform(data)
INSTRUCTION: Problem: So I fed the testing data, but when I try to test it with clf.predict() it just gives me an error. So I want it to predict on the data that i give, which is the last close price, the moving averages. However everytime i try something it just gives me an error. Also is there a better way to do this than on pandas. from sklearn import tree import pandas as pd import pandas_datareader as web import numpy as np df = web.DataReader('goog', 'yahoo', start='2012-5-1', end='2016-5-20') df['B/S'] = (df['Close'].diff() < 0).astype(int) closing = (df.loc['2013-02-15':'2016-05-21']) ma_50 = (df.loc['2013-02-15':'2016-05-21']) ma_100 = (df.loc['2013-02-15':'2016-05-21']) ma_200 = (df.loc['2013-02-15':'2016-05-21']) buy_sell = (df.loc['2013-02-15':'2016-05-21']) # Fixed close = pd.DataFrame(closing) ma50 = pd.DataFrame(ma_50) ma100 = pd.DataFrame(ma_100) ma200 = pd.DataFrame(ma_200) buy_sell = pd.DataFrame(buy_sell) clf = tree.DecisionTreeRegressor() x = np.concatenate([close, ma50, ma100, ma200], axis=1) y = buy_sell clf.fit(x, y) close_buy1 = close[:-1] m5 = ma_50[:-1] m10 = ma_100[:-1] ma20 = ma_200[:-1] b = np.concatenate([close_buy1, m5, m10, ma20], axis=1) clf.predict([close_buy1, m5, m10, ma20]) The error which this gives is: ValueError: cannot copy sequence with size 821 to array axis with dimension `7` I tried to do everything i know but it really did not work out. A: corrected, runnable code <code> from sklearn import tree import pandas as pd import pandas_datareader as web import numpy as np df = web.DataReader('goog', 'yahoo', start='2012-5-1', end='2016-5-20') df['B/S'] = (df['Close'].diff() < 0).astype(int) closing = (df.loc['2013-02-15':'2016-05-21']) ma_50 = (df.loc['2013-02-15':'2016-05-21']) ma_100 = (df.loc['2013-02-15':'2016-05-21']) ma_200 = (df.loc['2013-02-15':'2016-05-21']) buy_sell = (df.loc['2013-02-15':'2016-05-21']) # Fixed close = pd.DataFrame(closing) ma50 = pd.DataFrame(ma_50) ma100 = pd.DataFrame(ma_100) ma200 = pd.DataFrame(ma_200) buy_sell = pd.DataFrame(buy_sell) clf = tree.DecisionTreeRegressor() x = np.concatenate([close, ma50, ma100, ma200], axis=1) y = buy_sell clf.fit(x, y) </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(predict) </code> SOLUTION: close_buy1 = close[:-1] m5 = ma_50[:-1] m10 = ma_100[:-1] ma20 = ma_200[:-1] # b = np.concatenate([close_buy1, m5, m10, ma20], axis=1) predict = clf.predict(pd.concat([close_buy1, m5, m10, ma20], axis=1))
INSTRUCTION: Problem: I'm trying to iterate code for a linear regression over all columns, upwards of Z3. Here is a snippet of the dataframe called df1
   Time    A1    A2    A3    B1    B2    B3
1  5.00   NaN   NaN   NaN   NaN  7.40  7.51
2  5.50  7.44  7.63  7.58  7.54   NaN   NaN
3  6.00  7.62  7.86  7.71   NaN   NaN   NaN
This code returns the slope coefficient of a linear regression for one column only and concatenates the value to a NumPy array called series; here is what it looks like for extracting the slope of the first column:
series = np.array([])
df2 = df1[~np.isnan(df1['A1'])]
df3 = df2[['Time','A1']]
npMatrix = np.matrix(df3)
X, Y = npMatrix[:,0], npMatrix[:,1]
slope = LinearRegression().fit(X,Y)
m = slope.coef_[0]
series= np.concatenate((SGR_trips, m), axis = 0)
As it stands now, I am using this slice of code, replacing "A1" with a new column name all the way up to "Z3", and this is extremely inefficient. I know there are many easy ways to do this with some modules, but I have the drawback of having all these intermediate NaN values in the time series. So it seems like I'm limited to this method, or something like it.
I tried using a for loop such as:
for col in df1.columns:
and replacing 'A1', for example, with col in the code, but this does not seem to be working. Can anyone give me any ideas? Save the answers in a 1d array/list
A:
<code>
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
df1 = load_data()
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(slopes)
</code>
SOLUTION:
slopes = []
for col in df1.columns:
    if col == "Time":
        continue
    mask = ~np.isnan(df1[col])
    x = np.atleast_2d(df1.Time[mask].values).T
    y = np.atleast_2d(df1[col][mask].values).T
    reg = LinearRegression().fit(x, y)
    slopes.append(reg.coef_[0])
slopes = np.array(slopes).reshape(-1)
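A compact, self-contained version of the loop above, run on a tiny frame built from the snippet in the question; the values and gaps are taken from the example, and the guard for columns with fewer than two points is an added assumption:
<code>
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Tiny frame with the NaN gaps from the example
df1 = pd.DataFrame({'Time': [5.0, 5.5, 6.0],
                    'A1': [np.nan, 7.44, 7.62],
                    'A2': [np.nan, 7.63, 7.86],
                    'B2': [7.40, np.nan, np.nan]})

slopes = []
for col in df1.columns:
    if col == 'Time':
        continue
    mask = df1[col].notna()
    if mask.sum() < 2:                  # not enough points for a slope
        slopes.append(np.nan)
        continue
    x = df1.loc[mask, ['Time']].values  # 2-D (n, 1) as sklearn expects
    y = df1.loc[mask, col].values
    slopes.append(LinearRegression().fit(x, y).coef_[0])

slopes = np.array(slopes)
print(slopes)                           # one slope per non-Time column
</code>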
INSTRUCTION: Problem: How can I perform regression in sklearn using an SVM with a polynomial kernel (degree=2)? Use default arguments for everything else. Thanks.
A:
<code>
import numpy as np
import pandas as pd
import sklearn
X, y = load_data()
assert type(X) == np.ndarray
assert type(y) == np.ndarray
# fit, then predict X
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(predict)
</code>
SOLUTION:
from sklearn.svm import SVR
svr_poly = SVR(kernel='poly', degree=2)
svr_poly.fit(X, y)
predict = svr_poly.predict(X)
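A self-contained sketch with toy quadratic data; the data generation is an assumption, only the SVR call mirrors the solution:
<code>
import numpy as np
from sklearn.svm import SVR

# Toy data: y is roughly quadratic in x, a good match for a degree-2 kernel
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.1, 200)

svr_poly = SVR(kernel='poly', degree=2)   # all other arguments left at defaults
svr_poly.fit(X, y)
predict = svr_poly.predict(X)
print(predict[:5])
</code>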
INSTRUCTION: Problem: Here is a code example. To better understand it, I'm trying to train models with GradientBoostingClassifier using categorical variables as input.
from sklearn import datasets
from sklearn.ensemble import GradientBoostingClassifier
import pandas
iris = datasets.load_iris()
X = iris.data[(iris.target==0) | (iris.target==1)]
Y = iris.target[(iris.target==0) | (iris.target==1)]
train_indices = list(range(40)) + list(range(50,90))
test_indices = list(range(40,50)) + list(range(90,100))
X_train = X[train_indices]
X_test = X[test_indices]
y_train = Y[train_indices]
y_test = Y[test_indices]
X_train = pandas.DataFrame(X_train)
X_train[0] = ['a']*40 + ['b']*40
clf = GradientBoostingClassifier(learning_rate=0.01,max_depth=8,n_estimators=50).fit(X_train, y_train)
This piece of code reports an error like:
ValueError: could not convert string to float: 'b'
It seems that one-hot encoding of the categorical variables is required before they can be fed to GradientBoostingClassifier. But can GradientBoostingClassifier build models using categorical variables without one-hot encoding? I want to convert the categorical variable to a matrix and merge it back with the original training data using get_dummies in pandas. Could you give me some help on how to use this function to handle this?
A:
<code>
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.ensemble import GradientBoostingClassifier
import pandas
# load data in the example
X_train, y_train = load_data()
X_train[0] = ['a'] * 40 + ['b'] * 40
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
clf = GradientBoostingClassifier(learning_rate=0.01, max_depth=8, n_estimators=50).fit(X_train, y_train)
</code>
SOLUTION:
catVar = pd.get_dummies(X_train[0]).to_numpy()
X_train = np.concatenate((X_train.iloc[:, 1:], catVar), axis=1)
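An end-to-end sketch that rebuilds the training split from the example, one-hot encodes the string column with get_dummies, and then fits the classifier; the final score printout is only an assumption for checking that the fit ran:
<code>
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.ensemble import GradientBoostingClassifier

# Rebuild the training split from the example
iris = datasets.load_iris()
X = iris.data[(iris.target == 0) | (iris.target == 1)]
Y = iris.target[(iris.target == 0) | (iris.target == 1)]
train_indices = list(range(40)) + list(range(50, 90))
X_train = pd.DataFrame(X[train_indices])
y_train = Y[train_indices]
X_train[0] = ['a'] * 40 + ['b'] * 40          # make the first column categorical

# One-hot encode the string column and stitch it back onto the numeric columns
catVar = pd.get_dummies(X_train[0]).to_numpy()
X_train = np.concatenate((X_train.iloc[:, 1:], catVar), axis=1)

clf = GradientBoostingClassifier(learning_rate=0.01, max_depth=8,
                                 n_estimators=50).fit(X_train, y_train)
print(clf.score(X_train, y_train))
</code>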
INSTRUCTION: Problem: I have a csv file without headers which I'm importing into python using pandas. The last column is the target class, while the rest of the columns are pixel values for images. How can I go ahead and split this dataset into a training set and a testing set (80/20)? Also, once that is done how would I also split each of those sets so that I can define x (all columns except the last one), and y (the last column)? I've imported my file using: dataset = pd.read_csv('example.csv', header=None, sep=',') Thanks A: use random_state=42 <code> import numpy as np import pandas as pd dataset = load_data() </code> BEGIN SOLUTION <code> [insert] </code> END SOLUTION <code> print(x_train) print(y_train) print(x_test) print(y_test) </code> SOLUTION: from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(dataset.iloc[:, :-1], dataset.iloc[:, -1], test_size=0.2, random_state=42)
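A runnable sketch with a small random frame standing in for the headerless CSV; the pixel-like values are an assumption:
<code>
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in for pd.read_csv('example.csv', header=None, sep=','):
# five "pixel" columns plus a target class in the last column
rng = np.random.default_rng(42)
dataset = pd.DataFrame(np.column_stack([rng.integers(0, 256, size=(100, 5)),
                                        rng.integers(0, 2, size=100)]))

# x: every column but the last; y: the last column; 80/20 split
x_train, x_test, y_train, y_test = train_test_split(
    dataset.iloc[:, :-1], dataset.iloc[:, -1],
    test_size=0.2, random_state=42)
print(x_train.shape, x_test.shape)   # (80, 5) (20, 5)
</code>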
INSTRUCTION: Problem: I am using KMeans in sklearn on a data set which has more than 5000 samples. I want to get the 50 samples (not just the indices, but the full data) closest to "p" (e.g. p=2), a cluster center, as an output; here "p" means the p-th center. Can anyone help me?
A:
<code>
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
p, X = load_data()
assert type(X) == np.ndarray
km = KMeans()
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(closest_50_samples)
</code>
SOLUTION:
km.fit(X)
d = km.transform(X)[:, p]
indexes = np.argsort(d)[:50]
closest_50_samples = X[indexes]
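A sketch on synthetic data; the cluster count, random_state, and the data itself are assumptions, and only the transform/argsort pattern is the point:
<code>
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))   # synthetic stand-in for the 5000-sample set
p = 2                            # the p-th cluster center

km = KMeans(n_clusters=8, n_init=10, random_state=0)
km.fit(X)

# transform() returns the distance of every sample to every center;
# take the p-th column and keep the 50 smallest distances
d = km.transform(X)[:, p]
closest_50_samples = X[np.argsort(d)[:50]]
print(closest_50_samples.shape)  # (50, 4)
</code>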
INSTRUCTION: Problem: I have a dataframe whose last column is the target and the rest of the columns are the features. Now, how can I split this dataframe dataset into a training set (80%) and a testing set (20%)? Also, how should I then split each of those sets, so I can define x (all columns except the last one) and y (the last column)? Any help would be greatly appreciated.
A:
use random_state=42
<code>
import numpy as np
import pandas as pd
data = load_data()
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(x_train)
print(y_train)
print(x_test)
print(y_test)
</code>
SOLUTION:
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data.iloc[:, :-1], data.iloc[:, -1], test_size=0.2, random_state=42)
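The same train_test_split pattern, shown here on a small named-column frame whose last column is the target; the frame itself is an assumption for illustration:
<code>
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative frame: three feature columns, target in the last column
rng = np.random.default_rng(42)
data = pd.DataFrame({'f1': rng.normal(size=50),
                     'f2': rng.normal(size=50),
                     'f3': rng.normal(size=50),
                     'target': rng.integers(0, 2, size=50)})

x_train, x_test, y_train, y_test = train_test_split(
    data.iloc[:, :-1], data.iloc[:, -1], test_size=0.2, random_state=42)
print(len(x_train), len(x_test))   # 40 10
</code>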
INSTRUCTION: Problem: Given a list of variable-length feature lists:
features = [
    ['f1', 'f2', 'f3'],
    ['f2', 'f4', 'f5', 'f6'],
    ['f1', 'f2']
]
where each sample has a variable number of features, the feature dtype is str, and the features are already one-hot in nature. In order to use the feature selection utilities of sklearn, I have to convert the features to a 2D array which looks like:
   f1 f2 f3 f4 f5 f6
s1  1  1  1  0  0  0
s2  0  1  0  1  1  1
s3  1  1  0  0  0  0
How could I achieve it via sklearn or numpy?
A:
<code>
import pandas as pd
import numpy as np
import sklearn
features = load_data()
def solve(features):
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
    return new_features
new_features = solve(features)
print(new_features)
</code>
SOLUTION:
# def solve(features):
### BEGIN SOLUTION
from sklearn.preprocessing import MultiLabelBinarizer
new_features = MultiLabelBinarizer().fit_transform(features)
### END SOLUTION
# return new_features
# new_features = solve(features)
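Since the example feature lists are given in full, the solution can be checked directly; the printed matrix should match the table in the problem, with columns sorted alphabetically by feature name:
<code>
from sklearn.preprocessing import MultiLabelBinarizer

features = [['f1', 'f2', 'f3'],
            ['f2', 'f4', 'f5', 'f6'],
            ['f1', 'f2']]

# Each unique feature name becomes one indicator column: f1..f6
new_features = MultiLabelBinarizer().fit_transform(features)
print(new_features)
# [[1 1 1 0 0 0]
#  [0 1 0 1 1 1]
#  [1 1 0 0 0 0]]
</code>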
INSTRUCTION: Problem: I have set up a GridSearchCV and have a set of parameters, with which I will find the best combination of parameters. My GridSearch consists of 12 candidate models in total. However, I am also interested in seeing the accuracy score of all 12, not just the best score, which I can clearly see by using the .best_score_ method. I am curious about opening up the black box that GridSearch sometimes feels like. I see a scoring= argument to GridSearch, but I can't see any way to print out the scores. Actually, I want the full results of GridSearchCV, not just the best score, as a pandas DataFrame. Any advice is appreciated. Thanks in advance.
A:
<code>
import numpy as np
import pandas as pd
import sklearn
from sklearn.model_selection import GridSearchCV
GridSearch_fitted = load_data()
assert type(GridSearch_fitted) == sklearn.model_selection._search.GridSearchCV
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(full_results)
</code>
SOLUTION:
full_results = pd.DataFrame(GridSearch_fitted.cv_results_)
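A self-contained sketch that fits a small search so there is a cv_results_ to inspect; the estimator and grid below are assumptions, and only the pd.DataFrame(cv_results_) step is the answer itself:
<code>
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Small illustrative search (3 candidates x 3 folds)
X, y = load_iris(return_X_y=True)
param_grid = {'C': [0.1, 1.0, 10.0]}
GridSearch_fitted = GridSearchCV(LogisticRegression(max_iter=1000),
                                 param_grid, cv=3).fit(X, y)

# One row per candidate: params, mean/std test score, fit times, rank, ...
full_results = pd.DataFrame(GridSearch_fitted.cv_results_)
print(full_results[['params', 'mean_test_score', 'rank_test_score']])
</code>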