#StackBounty: #python #tensorflow #machine-learning #neural-network #artificial-intelligence How to control if input features contribut…

Bounty: 100

I’m trying to make the most basic of basic neural networks to get familiar with the functional API in TensorFlow 2.x.

Basically, what I’m trying to do is the following with my simplified iris dataset (i.e., setosa or not):

  1. Use the 4 features as input
  2. Dense layer of 3
  3. Sigmoid activation function
  4. Dense layer of 2 (one for each class)
  5. Softmax activation
  6. Binary cross entropy / log-loss as my loss function

However, I can’t figure out how to control one key aspect of the model. That is, how can I ensure that each feature from my input layer contributes to only one neuron in my subsequent dense layer? Also, how can I allow a feature to contribute to more than one neuron?

This isn’t clear to me from the documentation.

# Load data
from sklearn.datasets import load_iris
import pandas as pd

iris = load_iris()
X, y = load_iris(return_X_y=True, as_frame=True)
X = X.astype("float32")
X.index = X.index.map(lambda i: "iris_{}".format(i))
X.columns = X.columns.map(lambda j: j.split(" (")[0].replace(" ","_"))
y.index = X.index
y = y.map(lambda i:iris.target_names[i])
y_simplified = y.map(lambda label: int(label == "setosa"))
# One-hot encode the binary label (get_dummies ignores `columns` for a Series)
y_simplified = pd.get_dummies(y_simplified)
y_simplified.columns = ["not_setosa", "setosa"]

# Train test split
from sklearn.model_selection import train_test_split
seed = 0  # any fixed value, for reproducibility
X_train, X_test, y_train, y_test = train_test_split(X, y_simplified, test_size=0.3, random_state=seed)

# Simple neural network
import tensorflow as tf

# Input[4 features] -> Dense layer of 3 neurons -> Activation function -> Dense layer of 2 (one per class) -> Softmax
inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(3)(inputs)
x = tf.keras.layers.Activation(tf.nn.sigmoid)(x)
x = tf.keras.layers.Dense(2)(x)
outputs = tf.keras.layers.Activation(tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="simple_binary_iris")
model.compile(loss="binary_crossentropy", metrics=["accuracy"] )

history = model.fit(X_train, y_train, batch_size=64, epochs=10, validation_split=0.2)

test_scores = model.evaluate(X_test, y_test)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])


Model: "simple_binary_iris"
Layer (type)                 Output Shape              Param #   
input_44 (InputLayer)        [(None, 4)]               0         
dense_96 (Dense)             (None, 3)                 15        
activation_70 (Activation)   (None, 3)                 0         
dense_97 (Dense)             (None, 2)                 8         
activation_71 (Activation)   (None, 2)                 0         
Total params: 23
Trainable params: 23
Non-trainable params: 0
Epoch 1/10
2/2 [==============================] - 0s 40ms/step - loss: 0.6344 - accuracy: 0.6667 - val_loss: 0.6107 - val_accuracy: 0.7143
Epoch 2/10
2/2 [==============================] - 0s 6ms/step - loss: 0.6302 - accuracy: 0.6667 - val_loss: 0.6083 - val_accuracy: 0.7143
Epoch 3/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6278 - accuracy: 0.6667 - val_loss: 0.6056 - val_accuracy: 0.7143
Epoch 4/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6257 - accuracy: 0.6667 - val_loss: 0.6038 - val_accuracy: 0.7143
Epoch 5/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6239 - accuracy: 0.6667 - val_loss: 0.6014 - val_accuracy: 0.7143
Epoch 6/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6223 - accuracy: 0.6667 - val_loss: 0.6002 - val_accuracy: 0.7143
Epoch 7/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6209 - accuracy: 0.6667 - val_loss: 0.5989 - val_accuracy: 0.7143
Epoch 8/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6195 - accuracy: 0.6667 - val_loss: 0.5967 - val_accuracy: 0.7143
Epoch 9/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6179 - accuracy: 0.6667 - val_loss: 0.5953 - val_accuracy: 0.7143
Epoch 10/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6166 - accuracy: 0.6667 - val_loss: 0.5935 - val_accuracy: 0.7143
2/2 [==============================] - 0s 607us/step - loss: 0.6261 - accuracy: 0.6444
Test loss: 0.6261375546455383
Test accuracy: 0.644444465637207
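Keras’s Dense layer is always fully connected, so per-feature wiring has to be imposed by zeroing weights. One common workaround (my own sketch, not from the post; the wiring "feature i feeds only neuron i % 3" is an arbitrary assumption) is a fixed binary mask multiplied into the kernel. In Keras you would re-apply the mask via a `kernel_constraint` or a custom layer so it survives gradient updates; the idea itself, framework-free in NumPy:

```python
import numpy as np

# mask[i, j] == 1 means input feature i is allowed to feed hidden neuron j.
# Hypothetical wiring: feature i feeds only neuron i % 3.
mask = np.zeros((4, 3))
for i in range(4):
    mask[i, i % 3] = 1.0

rng = np.random.default_rng(0)
kernel = rng.normal(size=(4, 3))   # dense weights, as Keras would initialize
masked = kernel * mask             # forbidden connections become exactly 0

x = rng.normal(size=(1, 4))        # one sample, 4 features
hidden = x @ masked                # forward pass of the "restricted" Dense layer

# Each feature now contributes to exactly one hidden neuron:
assert all(np.count_nonzero(masked[i]) == 1 for i in range(4))
```

To let a feature contribute to more than one neuron, simply put more 1s in that feature’s mask row.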

Get this bounty!!!

#StackBounty: #python #python-3.x #machine-learning #autocomplete #artificial-intelligence Is there a way to reset Kite Autocomplete&#3…

Bounty: 50

I’m wondering if I can reset the training of Kite’s AI that uses my code. I want to do this because I want to change my code style and there is some stuff that I quit doing.

Take xrange for example; it was removed in Python 3 (I’m a Python coder). So, I want to reset all of the data it learned from me, as if I had just installed it again. I don’t want to uninstall and reinstall it.

Would uninstalling the Sublime Text/Atom plugins and reinstalling them do the trick? Or is it not possible?

As for specs: I’m on macOS Catalina (10.15.5 (19F96)), with no Kite Pro subscription and no Kite account, running Kite version 0.20200609.2.

I really want to know if there’s an official way, not some file-removing magic.

But if some file-removing magic is necessary, then I’m fine with that.

Someone set a bounty on this; I don’t wanna.


#StackBounty: #algorithms #graphs #artificial-intelligence Use AI to analyze an undirected weighted graph and generate two subgraphs

Bounty: 50

I have a weighted undirected graph G0, from which I want to derive weighted undirected graphs G1 and G2.

  • All nodes in G0 have a degree of at least one.
  • G1 and G2 can contain the same node but not the same edge.
  • All nodes in both G1 and G2 should have a degree of at least one.
  • All edges from G0 should be contained in either G1 or G2.
  • Both G1 and G2 are subgraphs of G0 and cannot contain additional nodes.

Let’s assume that the sum of all the edges in G0 = x.

I need to find the pair of G1 and G2 for which the sum of all edges in G1 is closest to x/2.

How would you go about solving that?

Edit: After much hassle due to my lack of explanatory powers, here is a better explanation of the problem. (This is for an unweighted graph, but it will at least point me in the general direction of a suitable algorithm; it also avoids technical terms, because it is obvious that I struggle to understand them at the moment.)

Let’s assume that we have an undirected graph with vertices A, B, C, D, E:

A --- B 
|   / | 
|  /  |  C
| /   | / 
D --- E 

Which means an adjacency matrix like this one:

  A  B  C  D  E
A -  1  0  1  0
B 1  -  1  1  1
C 0  1  -  0  1
D 1  1  0  -  1
E 0  1  1  1  -

Assuming all edges are undirected, we have the following edges:

(a,b) (a,d) (b,d) (b,e) (b,c) (e,d) (e,c)

What I want is to take that graph and, through AI, generate two groups of edges of roughly the same size. (In this particular case, since we have 7 edges, we would get a group of 3 edges and another of 4 edges.)

Group 1: (a,b) (a,d) (b,d) (e,d)
A --- B 
|   / 
|  /  
| /     
D --- E 

Group 2: (b,e) (b,c) (e,c)
B
| \
|  C
| /
E

Notice that all edges in each group are connected by at least one node, that is a requirement.

Another valid solution could be, for example:

Group 1: (d,a) (a,b) (b,c) (c,e)
A --- B
|      \
|       C
|      /
D     E
Group 2: (d,b) (b,e) (e,d)
      B
    / |
   /  |
  /   |
D --- E

And so on. As you can see, there are many valid solutions, but I want to know if there’s an algorithm to find one.
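For a graph this small, no AI is needed: exhaustive search over the 2^|E| edge assignments suffices. A sketch (my own, not from the post) that partitions the example’s edges into two connected groups whose total weight is as close to half as possible (here every edge has weight 1):

```python
from itertools import product

# Edges of the example graph (unweighted, so every edge has weight 1).
edges = [("a", "b"), ("a", "d"), ("b", "d"), ("b", "e"),
         ("b", "c"), ("e", "d"), ("e", "c")]

def connected(edge_set):
    """True if the edges form one non-empty connected component."""
    if not edge_set:
        return False
    nodes = {n for e in edge_set for n in e}
    seen = {next(iter(nodes))}
    frontier = list(seen)
    while frontier:
        cur = frontier.pop()
        for u, v in edge_set:
            for a, b in ((u, v), (v, u)):
                if a == cur and b not in seen:
                    seen.add(b)
                    frontier.append(b)
    return seen == nodes

def best_split(edges, weight=lambda e: 1):
    total = sum(weight(e) for e in edges)
    best, best_gap = None, None
    for assign in product((0, 1), repeat=len(edges)):   # 2^|E| candidates
        g1 = [e for e, a in zip(edges, assign) if a == 0]
        g2 = [e for e, a in zip(edges, assign) if a == 1]
        if not (connected(g1) and connected(g2)):
            continue
        gap = abs(sum(weight(e) for e in g1) - total / 2)
        if best_gap is None or gap < best_gap:
            best, best_gap = (g1, g2), gap
    return best

g1, g2 = best_split(edges)
```

For large graphs this brute force is exponential; a heuristic (e.g. a genetic algorithm over the same 0/1 assignment vector) would replace the exhaustive loop while keeping the same connectivity and balance checks.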


#StackBounty: #android #ios #game-development #artificial-intelligence #augmented-reality Library for creating an augmented reality app…

Bounty: 100

I’m trying to build an application similar to Wanna Kicks. I’m looking for a recommendation for AR library that will enable me to display 3D objects on top of human legs.

I guess the library I’m looking for would give me the ability to train the application on predefined images/models of the human leg, allow me to detect the angle/rotation and position of the leg, and place the 3D object on top of the leg image.

I have worked before with SDKs like Vuforia, ARCore, and ARKit, and they don’t provide the ability to track the kind of targets I’m talking about. They can detect markers and static 2D/3D objects with fixed shapes and features.

I’m also aware that I most likely won’t find a library that satisfies every requirement. I’m looking for options that do the heavy lifting for me (training, detection) while still letting me add the missing pieces myself (detecting position and rotation).

What are my options for an augmented reality library, or a non-augmented-reality library that can still do the job?

I’m looking for a library that runs on Android, iOS, or both.


Distributed Evolutionary Algorithms in Python


DEAP is a novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with parallelization mechanisms such as multiprocessing and SCOOP.

DEAP includes the following features:

  • Genetic algorithm using any imaginable representation
    • List, Array, Set, Dictionary, Tree, Numpy Array, etc.
  • Genetic programming using prefix trees
    • Loosely typed, Strongly typed
    • Automatically defined functions
  • Evolution strategies (including CMA-ES)
  • Multi-objective optimisation (NSGA-II, SPEA2, MO-CMA-ES)
  • Co-evolution (cooperative and competitive) of multiple populations
  • Parallelization of the evaluations (and more)
  • Hall of Fame of the best individuals that lived in the population
  • Checkpoints that take snapshots of a system regularly
  • Benchmarks module containing most common test functions
  • Genealogy of an evolution (that is compatible with NetworkX)
  • Examples of alternative algorithms: Particle Swarm Optimization, Differential Evolution, Estimation of Distribution Algorithm

See the DEAP User’s Guide for DEAP documentation.
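As a taste of what such a framework automates, here is a minimal generational GA on the classic OneMax problem in plain Python (my own illustration, not DEAP code; DEAP’s creator/toolbox machinery would replace most of this boilerplate):

```python
import random

random.seed(42)

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40
MUT_RATE, CX_RATE = 0.05, 0.7

def fitness(ind):
    # OneMax: maximize the number of 1-bits in the genome.
    return sum(ind)

def tournament(pop, k=3):
    # Tournament selection: best of k random individuals.
    return max(random.sample(pop, k), key=fitness)

# Random initial population of bit-strings.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    nxt = []
    while len(nxt) < POP_SIZE:
        p1, p2 = tournament(pop), tournament(pop)
        if random.random() < CX_RATE:          # one-point crossover
            cut = random.randrange(1, GENOME_LEN)
            child = p1[:cut] + p2[cut:]
        else:
            child = p1[:]
        # Independent bit-flip mutation on each gene.
        child = [b ^ 1 if random.random() < MUT_RATE else b for b in child]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
```

In DEAP the same loop is a few lines: register the genome, `fitness`, `cxOnePoint`, `mutFlipBit`, and `selTournament` on a toolbox, then call one of the built-in algorithms.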


We encourage you to use easy_install or pip to install DEAP on your system. Other installation procedures like apt-get, yum, etc. usually provide an outdated version.

pip install deap

The latest version can be installed with

pip install git+https://github.com/DEAP/deap@master

If you wish to build from sources, download or clone the repository and type

python setup.py install



HackerRank: BotClean Partially Observable

Problem Statement

The game BotClean took place in a fully observable environment, i.e., the state of every cell was visible to the bot at all times. Let us consider a variation in which the environment is partially observable. The bot has the same actuators and sensors, but its sensor visibility is confined to the 8 adjacent cells.

Input Format
The first line contains two space-separated integers which indicate the current position of the bot. The board is indexed using Matrix Convention.

5 lines follow, representing the grid. Each cell in the grid is represented by any of the following 4 characters:
‘b’ (ascii value 98) indicates the bot’s current position,
‘d’ (ascii value 100) indicates a dirty cell,
‘-‘ (ascii value 45) indicates a clean cell in the grid, and
‘o’ (ascii value 111) indicates the cell that is currently not visible.

Output Format
Output is the action that is taken by the bot in the current step. It can either be any of the movements in 4 directions or the action of cleaning the cell in which it is currently located. Hence the output formats are LEFT, RIGHT, UP, DOWN or CLEAN.

Sample Input

0 0

Sample Output


Complete the function next_move, which takes in 3 parameters: posr and posc denote the coordinates of the bot’s current position, and board denotes the board state. Print the bot’s next move.

The goal is to clean all the dirty cells in as few moves as possible. Your score is (200 – #bot moves)/25. All bots in this challenge will be given the same input. CLEAN is also considered a move.
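A greedy sketch of next_move (my own, not an official solution): clean if standing on dirt, otherwise BFS to the nearest visible 'd', otherwise explore toward the nearest unseen 'o'. It deliberately ignores persisting knowledge across turns, which a competitive solution would add, since the judge calls the bot once per move:

```python
from collections import deque

def next_move(posr, posc, board):
    """Greedy bot: CLEAN if on dirt, else BFS toward the nearest visible
    dirty cell 'd', else explore toward the nearest unseen cell 'o'."""
    if board[posr][posc] == "d":
        print("CLEAN")
        return
    for target in ("d", "o"):          # prefer known dirt over exploration
        prev = {(posr, posc): None}    # BFS parents; doubles as visited set
        queue = deque([(posr, posc)])
        while queue:
            r, c = queue.popleft()
            if board[r][c] == target:
                # Walk the path back to the step adjacent to the bot.
                while prev[(r, c)] != (posr, posc):
                    r, c = prev[(r, c)]
                if r < posr:
                    print("UP")
                elif r > posr:
                    print("DOWN")
                elif c < posc:
                    print("LEFT")
                else:
                    print("RIGHT")
                return
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < 5 and 0 <= nc < 5 and (nr, nc) not in prev:
                    prev[(nr, nc)] = (r, c)
                    queue.append((nr, nc))
    print("CLEAN")  # nothing dirty or unseen remains; any valid move works
```

To actually score well, the real challenge lets you persist a map of seen cells to a file between invocations, so previously observed dirt outside the current 8-cell window isn’t forgotten.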

A solution tester is posted alongside the program so users can test their input as well.