#StackBounty: #python #google-chrome #anaconda How to make chrome compatible with Jupyter for Anaconda?

Bounty: 50

I am an Anaconda user, and Jupyter is a neat tool for running Python code. However, on my MacBook I can't open it in Chrome (the page fails with "This page isn't working: localhost didn't send any data."), although it works in Safari. I have tried reinstalling Chrome, but that didn't fix it. My system is Mac OS 10.11.5.
How can I fix this?
I understand that the problem description may not be specific enough, but this has puzzled me for quite a while.


Get this bounty!!!

#StackBounty: #python #amazon-s3 #airflow setting up s3 for logs in airflow

Bounty: 50

I am using docker-compose to set up a scalable Airflow cluster. I based my approach on this Dockerfile: https://hub.docker.com/r/puckel/docker-airflow/

My problem is getting the logs set up to write to and read from S3. When a DAG has completed, I get an error like this:

*** Log file isn't local.
*** Fetching here: http://ea43d4d49f35:8793/log/xxxxxxx/2017-06-26T11:00:00
*** Failed to fetch log file from worker.

*** Reading remote logs...
Could not read logs from s3://buckets/xxxxxxx/airflow/logs/xxxxxxx/2017-06-
26T11:00:00

I set up a new section in the airflow.cfg file like this

[MyS3Conn]
aws_access_key_id = xxxxxxx
aws_secret_access_key = xxxxxxx
aws_default_region = xxxxxxx

And then I specified the S3 path in the remote logging section of airflow.cfg:

remote_base_log_folder = s3://buckets/xxxx/airflow/logs
remote_log_conn_id = MyS3Conn

Did I set this up properly and this is simply a bug? Or is there a recipe for success here that I am missing?

Update

I tried exporting the connection in URI and JSON formats and neither seemed to work. I then exported aws_access_key_id and aws_secret_access_key directly, and Airflow started picking them up. Now I get this error in the worker logs:

6/30/2017 6:05:59 PM INFO:root:Using connection to: s3
6/30/2017 6:06:00 PM ERROR:root:Could not read logs from s3://buckets/xxxxxx/airflow/logs/xxxxx/2017-06-30T23:45:00
6/30/2017 6:06:00 PM ERROR:root:Could not write logs to s3://buckets/xxxxxx/airflow/logs/xxxxx/2017-06-30T23:45:00
6/30/2017 6:06:00 PM Logging into: /usr/local/airflow/logs/xxxxx/2017-06-30T23:45:00
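
As a side note, read/write failures like this can be checked outside Airflow. The sketch below (a minimal check, assuming boto3 is installed; the bucket, key, and region are placeholders) verifies that the credentials configured in the connection can actually write to and read from the log bucket:

import boto3

# placeholders: use the same credentials, region and bucket as in the Airflow connection
s3 = boto3.client('s3',
                  aws_access_key_id='xxxxxxx',
                  aws_secret_access_key='xxxxxxx',
                  region_name='us-east-1')
s3.put_object(Bucket='my-airflow-logs', Key='airflow/logs/ping.txt', Body=b'ping')
print(s3.get_object(Bucket='my-airflow-logs', Key='airflow/logs/ping.txt')['Body'].read())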


Get this bounty!!!

#StackBounty: #python #machine-learning #neural-network #tensorflow Low accuracy of LSTM model tensorflow

Bounty: 50

I am trying to build an LSTM model for sentiment analysis using TensorFlow, and I have already read up on how the LSTM model works.

The following code (create_sentiment_featuresets.py) generates the lexicon from 5000 positive sentences and 5000 negative sentences.

import nltk
from nltk.tokenize import word_tokenize
import numpy as np
import random
from collections import Counter
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def create_lexicon(pos, neg):
    lexicon = []
    with open(pos, 'r') as f:
        contents = f.readlines()
        for l in contents[:len(contents)]:
            l= l.decode('utf-8')
            all_words = word_tokenize(l)
            lexicon += list(all_words)
    f.close()

    with open(neg, 'r') as f:
        contents = f.readlines()    
        for l in contents[:len(contents)]:
            l= l.decode('utf-8')
            all_words = word_tokenize(l)
            lexicon += list(all_words)
    f.close()

    lexicon = [lemmatizer.lemmatize(i) for i in lexicon]
    w_counts = Counter(lexicon)
    l2 = []
    for w in w_counts:
        if 1000 > w_counts[w] > 50:
            l2.append(w)
    print("Lexicon length create_lexicon: ",len(lexicon))
    return l2

def sample_handling(sample, lexicon, classification):
    featureset = []
    print("Lexicon length Sample handling: ",len(lexicon))
    with open(sample, 'r') as f:
        contents = f.readlines()
        for l in contents[:len(contents)]:
            l= l.decode('utf-8')
            current_words = word_tokenize(l.lower())
            current_words= [lemmatizer.lemmatize(i) for i in current_words]
            features = np.zeros(len(lexicon))
            for word in current_words:
                if word.lower() in lexicon:
                    index_value = lexicon.index(word.lower())
                    features[index_value] +=1
            features = list(features)
            featureset.append([features, classification])
    f.close()
    print("Feature SET------")
    print(len(featureset))
    return featureset

def create_feature_sets_and_labels(pos, neg, test_size = 0.1):
    global m_lexicon
    m_lexicon = create_lexicon(pos, neg)
    features = []
    features += sample_handling(pos, m_lexicon, [1,0])
    features += sample_handling(neg, m_lexicon, [0,1])
    random.shuffle(features)
    features = np.array(features)

    testing_size = int(test_size * len(features))

    train_x = list(features[:,0][:-testing_size])
    train_y = list(features[:,1][:-testing_size])
    test_x = list(features[:,0][-testing_size:])
    test_y = list(features[:,1][-testing_size:])
    return train_x, train_y, test_x, test_y

def get_lexicon():
    global m_lexicon
    return m_lexicon

The following code (sentiment_analysis.py) performs sentiment analysis with a simple feed-forward neural network model and works fine:

from create_sentiment_featuresets import create_feature_sets_and_labels
from create_sentiment_featuresets import get_lexicon
import tensorflow as tf
import numpy as np
# extras for testing
from nltk.tokenize import word_tokenize 
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
#- end extras

train_x, train_y, test_x, test_y = create_feature_sets_and_labels('pos.txt', 'neg.txt')


# pt A-------------

n_nodes_hl1 = 1500
n_nodes_hl2 = 1500
n_nodes_hl3 = 1500

n_classes = 2
batch_size = 100
hm_epochs = 10

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

hidden_1_layer = {'f_fum': n_nodes_hl1,
                'weight': tf.Variable(tf.random_normal([len(train_x[0]), n_nodes_hl1])),
                'bias': tf.Variable(tf.random_normal([n_nodes_hl1]))}
hidden_2_layer = {'f_fum': n_nodes_hl2,
                'weight': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                'bias': tf.Variable(tf.random_normal([n_nodes_hl2]))}
hidden_3_layer = {'f_fum': n_nodes_hl3,
                'weight': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                'bias': tf.Variable(tf.random_normal([n_nodes_hl3]))}
output_layer = {'f_fum': None,
                'weight': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                'bias': tf.Variable(tf.random_normal([n_classes]))}


def nueral_network_model(data):
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weight']), hidden_1_layer['bias'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weight']), hidden_2_layer['bias'])
    l2 = tf.nn.relu(l2)
    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weight']), hidden_3_layer['bias'])
    l3 = tf.nn.relu(l3)
    output = tf.matmul(l3, output_layer['weight']) + output_layer['bias']
    return output

# pt B--------------

def train_neural_network(x):
    prediction = nueral_network_model(x)
    cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits= prediction, labels= y))
    optimizer = tf.train.AdamOptimizer(learning_rate= 0.001).minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i+ batch_size
                batch_x = np.array(train_x[start: end])
                batch_y = np.array(train_y[start: end])
                _, c = sess.run([optimizer, cost], feed_dict= {x: batch_x, y: batch_y})
                epoch_loss += c
                i+= batch_size
            print('Epoch', epoch+ 1, 'completed out of ', hm_epochs, 'loss:', epoch_loss)

        correct= tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x:test_x, y:test_y}))

        # testing --------------
        m_lexicon= get_lexicon()
        print('Lexicon length: ',len(m_lexicon))        
        input_data= "David likes to go out with Kary"       
        current_words= word_tokenize(input_data.lower())
        current_words = [lemmatizer.lemmatize(i) for i in current_words]
        features = np.zeros(len(m_lexicon))
        for word in current_words:
            if word.lower() in m_lexicon:
                index_value = m_lexicon.index(word.lower())
                features[index_value] +=1

        features = np.array(list(features)).reshape(1,-1)
        print('features length: ',len(features))
        result = sess.run(tf.argmax(prediction.eval(feed_dict={x:features}), 1))
        print(prediction.eval(feed_dict={x:features}))
        if result[0] == 0:
            print('Positive: ', input_data)
        elif result[0] == 1:
            print('Negative: ', input_data)

train_neural_network(x)

I have modified the above (sentiment_analysis.py) into an LSTM model after reading the RNN w/ LSTM cell example in TensorFlow and Python, which applies an LSTM to the MNIST image dataset.

Somehow, through much trial and error, I was able to get the running code below (sentiment_demo_lstm.py):

import tensorflow as tf
from tensorflow.contrib import rnn
from create_sentiment_featuresets import create_feature_sets_and_labels
from create_sentiment_featuresets import get_lexicon

import numpy as np

# extras for testing
from nltk.tokenize import word_tokenize 
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
#- end extras

train_x, train_y, test_x, test_y = create_feature_sets_and_labels('pos.txt', 'neg.txt')

n_steps= 100
input_vec_size= len(train_x[0])
hm_epochs = 8
n_classes = 2
batch_size = 128
n_hidden = 128

x = tf.placeholder('float', [None, input_vec_size, 1])
y = tf.placeholder('float')

def recurrent_neural_network(x):
    layer = {'weights': tf.Variable(tf.random_normal([n_hidden, n_classes])),   # hidden_layer, n_classes
            'biases': tf.Variable(tf.random_normal([n_classes]))}

    h_layer = {'weights': tf.Variable(tf.random_normal([1, n_hidden])), # hidden_layer, n_classes
            'biases': tf.Variable(tf.random_normal([n_hidden], mean = 1.0))}

    x = tf.transpose(x, [1,0,2])
    x = tf.reshape(x, [-1, 1])
    x = tf.split(x, input_vec_size, 0)

    lstm_cell = rnn.BasicLSTMCell(n_hidden, state_is_tuple=True)
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype= tf.float32)
    output = tf.matmul(outputs[-1], layer['weights']) + layer['biases']

    return output

def train_neural_network(x):
    prediction = recurrent_neural_network(x)
    cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits= prediction, labels= y))
    optimizer = tf.train.AdamOptimizer(learning_rate= 0.001).minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while (i+ batch_size) < len(train_x):
                start = i
                end = i+ batch_size
                batch_x = np.array(train_x[start: end])
                batch_y = np.array(train_y[start: end])
                batch_x = batch_x.reshape(batch_size ,input_vec_size, 1)
                _, c = sess.run([optimizer, cost], feed_dict= {x: batch_x, y: batch_y})
                epoch_loss += c
                i+= batch_size
            print('--------Epoch', epoch+ 1, 'completed out of ', hm_epochs, 'loss:', epoch_loss)

        correct= tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))

        print('Accuracy:', accuracy.eval({x:np.array(test_x).reshape(-1, input_vec_size, 1), y:test_y}))

        # testing --------------
        m_lexicon= get_lexicon()
        print('Lexicon length: ',len(m_lexicon))
        input_data= "Mary does not like pizza"  #"he seems to to be healthy today"  #"David likes to go out with Kary"

        current_words= word_tokenize(input_data.lower())
        current_words = [lemmatizer.lemmatize(i) for i in current_words]
        features = np.zeros(len(m_lexicon))
        for word in current_words:
            if word.lower() in m_lexicon:
                index_value = m_lexicon.index(word.lower())
                features[index_value] +=1
        features = np.array(list(features)).reshape(-1, input_vec_size, 1)
        print('features length: ',len(features))

        result = sess.run(tf.argmax(prediction.eval(feed_dict={x:features}), 1))
        print('RESULT: ', result)
        print(prediction.eval(feed_dict={x:features}))
        if result[0] == 0:
            print('Positive: ', input_data)
        elif result[0] == 1:
            print('Negative: ', input_data)

train_neural_network(x)

Output of

print(train_x[0])
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

print(train_y[0])
[0, 1]

len(train_x) = 9596 and len(train_x[0]) = 423, meaning train_x is a list of 9596 feature vectors, each of length 423?
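
To make the shapes concrete, here is a tiny illustration of what the reshape in the training loop does to a hypothetical batch (the 128 and 423 match batch_size and len(train_x[0]) above; the zeros are stand-ins for real feature vectors):

import numpy as np

batch_x = np.zeros((128, 423))           # a batch of 128 bag-of-words vectors of length 423
batch_x = batch_x.reshape(128, 423, 1)   # fed to the LSTM as 423 time steps of a single value each
print(batch_x.shape)                     # (128, 423, 1)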

The above code runs, BUT I am not able to get the accuracy above 50 percent; it is always between 45 and 50 %.

  1. Is there anything wrong in my implementation?

  2. How should I approach improving the accuracy, apart from collecting a larger dataset?

I am new to this field; please help.
Thanks.


Get this bounty!!!

#StackBounty: #python #python-2.7 #random #steganography Python PRNG_Steganography lsb method

Bounty: 50

This is my implementation of a PRNG_Steganography tool for Python. You can also find the code on GitHub.

Imports

from PIL import Image
import numpy as np
import sys
import os
import getopt
import base64
import random
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.backends import default_backend


# Set location of directory we are working in to load/save files
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))

Encryption and decryption of the text methods via the Cryptography module

def get_key(password):
    digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
    digest.update(password)
    return base64.urlsafe_b64encode(digest.finalize())


def encrypt_text(password, token):
    f = Fernet(get_key(password))
    return f.encrypt(bytes(token))


def decrypt_text(password, token):
    f = Fernet(get_key(password))
    return f.decrypt(bytes(token))

Main encryption method

def encrypt(filename, text, magic):
    # check whether the text is a file name
    if len(text.split('.')[1:]):
        text = read_files(os.path.join(__location__, text))
    t = [int(x) for x in ''.join(text_ascii(encrypt_text(magic, text)))] + [0]*7  # endbit
    try:
        # Change format to png
        filename = change_image_form(filename)

        # Load Image
        d_old = load_image(filename)

        # Check if image can contain the data
        if d_old.size < len(t):
            print '[*] Image not big enough'
            sys.exit(0)

        # get new data and save to image
        d_new = encrypt_lsb(d_old, magic, t)
        save_image(d_new, 'new_'+filename)
    except Exception, e:
        print str(e)

Main decryption method

def decrypt(filename, magic):
    try:
        # Load image
        data = load_image(filename)

        # Retrieve text
        text = decrypt_lsb(data, magic)
        print '[*] Retrieved text: \n%s' % decrypt_text(magic, text)
    except Exception, e:
        print str(e)

Random methods

def text_ascii(text):
    return map(lambda char: '{:07b}'.format(ord(char)), text)


def ascii_text(byte_char):
    return chr(int(byte_char, 2))


def next_random(random_list, data):
    next_random_number = random.randint(0, data.size-1)
    while next_random_number in random_list:
        next_random_number = random.randint(0, data.size-1)
    return next_random_number


def generate_seed(magic):
    seed = 1
    for char in magic:
        seed *= ord(char)
    print '[*] Your magic number is %d' % seed
    return seed

Encrypt via lsb_method

def encrypt_lsb(data, magic, text):
    print '[*] Starting Encryption'

    # We must alter the seed but for now lets make it simple
    random.seed(generate_seed(magic))

    random_list = []
    for i in range(len(text)):
        next_random_number = next_random(random_list, data)
        random_list.append(next_random_number)
        data.flat[next_random_number] = (data.flat[next_random_number] & ~1) | text[i]

    print '[*] Finished Encryption'
    return data

Decrypt via lsb_method

def decrypt_lsb(data, magic):
    print '[*] Starting Decryption'
    random.seed(generate_seed(magic))

    random_list = []
    output = temp_char = ''

    for i in range(data.size):
        next_random_number = next_random(random_list, data)
        random_list.append(next_random_number)
        temp_char += str(data.flat[next_random_number] & 1)
        if len(temp_char) == 7:
            if int(temp_char) > 0:
                output += ascii_text(temp_char)
                temp_char = ''
            else:
                print '[*] Finished Decryption'
                return output

Image handling methods

def load_image(filename):
    img = Image.open(os.path.join(__location__, filename))
    img.load()
    data = np.asarray(img, dtype="int32")
    return data


def save_image(npdata, outfilename):
    img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "RGB")
    img.save(os.path.join(__location__, outfilename))


def change_image_form(filename):
    f = filename.split('.')
    if not (f[-1] == 'bmp' or f[-1] == 'BMP' or f[-1] == 'PNG' or f[-1] == 'png'):
        img = Image.open(os.path.join(__location__, filename))
        img.load()
        filename = ''.join(f[:-1]) + '.png'
        img.save(os.path.join(__location__, filename))
    return filename


def read_files(filename):
    if os.path.exists(filename):
        with open(filename, 'r') as f:
            return ''.join([i for i in f])
    return filename.replace(__location__, '')[1:]

Usage and main

def usage():
    print "Steganography prng-Tool @Ludisposed & @Qin"
    print ""
    print "Usage: prng_stego.py -e -m magic filename text "
    print "-e --encrypt              - encrypt filename with text"
    print "-d --decrypt              - decrypt filename"
    print "-m --magic                - encrypt/decrypt with password"
    print ""
    print ""
    print "Examples: "
    print "prng_stego.py -e -m pass test.png howareyou"
    print 'python prng_stego.py -e -m magic test.png tester.sh'
    print 'python prng_stego.py -e -m magic test.png file_test.txt'
    print 'prng_stego.py --encrypt --magic password test.png "howareyou  some other text"'
    print ''
    print "prng_stego.py -d -m password test.png"
    print "prng_stego.py --decrypt --magic password test.png"
    sys.exit(0)

if __name__ == "__main__":
    if not len(sys.argv[1:]):
        usage()
    try:
        opts, args = getopt.getopt(sys.argv[1:], "hedm:", ["help", "encrypt", "decrypt", "magic="])
    except getopt.GetoptError as err:
        print str(err)
        usage()

    magic = to_encrypt = None
    for o, a in opts:
        if o in ("-h", "--help"):
            usage()
        elif o in ("-e", "--encrypt"):
            to_encrypt = True
        elif o in ("-d", "--decrypt"):
            to_encrypt = False
        elif o in ("-m", "--magic"):
            magic = a
        else:
            assert False, "Unhandled Option"

    if magic is None or to_encrypt is None:
        usage()

    if not to_encrypt:
        filename = args[0]
        decrypt(filename, magic)
    else:
        filename = args[0]
        text = args[1]
        encrypt(filename, text, magic) 

Any general Coding tips are welcome!

I would also love some tips on how to use the magic password to create a weird seed.

It works by seeding the random module and getting the next unused random integer with that seed. During decryption it knows when to stop looking for bits once it reads the end marker [0]*7.
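
To make the seeding concrete, here is what generate_seed computes for a hypothetical password 'pass' (it is simply the product of the character codes):

seed = 1
for char in 'pass':      # ord values: 112, 97, 115, 115
    seed *= ord(char)
print seed               # 112 * 97 * 115 * 115 = 143676400

Note that because multiplication is commutative, any anagram of the password (e.g. 'ssap') produces exactly the same seed, which is one reason this derivation is weak.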

Furthermore, I'm interested in how random this really is. I think this is harder to decrypt than plain LSB steganography, but I cannot prove anything.

Kind Regards: Ludisposed


Get this bounty!!!

#StackBounty: #command-line #python #symbolic-link #opencv #anaconda How to import Anaconda's modules in system's python?

Bounty: 50

I’ve installed opencv for Anaconda with this command:

conda install opencv

And when I run python3.6 in the terminal, I can import the cv2 module without any issue.

So opencv has been installed into Anaconda's environment successfully.

Python 3.6.1 |Anaconda custom (64-bit)| (default, May 11 2017, 13:09:58) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2

But when I import cv2 in python3.5:

Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'cv2'

So I can't use opencv in the system's python.

And here is the question: how can I import Anaconda's modules (especially cv2) in the system's python?

How can I create a symbolic link from Anaconda's modules to the system python's path?
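
For illustration, the most direct (and fragile) variant of this idea is simply to put Anaconda's site-packages on the system interpreter's path at runtime. The path below is a hypothetical install location, and a cv2 binary built for Python 3.6 will generally not load under Python 3.5, so treat this as a sketch of the idea rather than a guaranteed fix:

import os
import sys

# hypothetical Anaconda install location; adjust to match your machine
sys.path.append(os.path.expanduser('~/anaconda3/lib/python3.6/site-packages'))
import cv2   # may still fail if the compiled module targets a different Python version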


Get this bounty!!!

#StackBounty: #machine-learning #distributions #python #scikit-learn #transfer-learning How to identify the subset of a large dataset t…

Bounty: 50

I am trying to implement transfer learning to make predictions on a test set using a model trained on another dataset collected from a different sample.

For this, I need to identify the subset of training data that has a similar distribution with the test data.

I wonder if there are any methods to achieve this in scikit-learn or Python. Any suggestions or ideas?
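
For illustration, one common heuristic for this (sometimes called adversarial validation) is to train a classifier to distinguish training rows from test rows and then keep the training rows it scores as most test-like. The sketch below uses scikit-learn with synthetic data; the feature matrices, the distribution shift, and the subset size are all placeholders:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 5))   # stand-in for the real training features
X_test = rng.normal(0.5, 1.0, size=(200, 5))     # stand-in test features with a shifted distribution

# label 0 = row comes from the training set, 1 = row comes from the test set
X = np.vstack([X_train, X_test])
y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])

clf = LogisticRegression().fit(X, y)
test_likeness = clf.predict_proba(X_train)[:, 1]   # probability a training row looks like test data
subset_idx = np.argsort(test_likeness)[-500:]      # the 500 most test-like training rows
X_train_subset = X_train[subset_idx]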


Get this bounty!!!
