#StackBounty: #python #.net #tensorflow #nodes #ml.net Correct pb file to move Tensorflow model into ML.NET

Bounty: 100

I have a TensorFlow model (a 1D CNN) that I built and would now like to use from .NET. To do so I need to know the input and output nodes. When I upload the model to Netron I get a different graph depending on the save method, and the only one that looks correct comes from an h5 file. Here is the model.summary():

[screenshot: model.summary() output]

If I save the model as an h5 (model.save("Mn_pb_model.h5")) and load that into Netron to graph it, everything looks correct:

[screenshot: Netron graph of the h5 model]

However, ML.NET will not accept the h5 format, so the model needs to be saved as a pb. Looking through samples of adopting TensorFlow in ML.NET, this sample shows a TensorFlow model saved in something close to the SavedModel format recommended by TensorFlow (and also recommended by ML.NET: "Download an unfrozen [SavedModel format] …"). However, when saving and loading the pb file into Netron I get this:

[screenshot: Netron graph of the SavedModel pb]

And zoomed in a little further (on the far right side),

[screenshot: zoomed-in view of the SavedModel graph, far right side]

As you can see, it looks nothing like it should. Additionally, the input and output nodes are not correct, so it will not work for ML.NET (and I think something is wrong). I am using the recommended way from TensorFlow to determine the input/output nodes:

[screenshot: TensorFlow code used to determine the input/output node names]

When I try to obtain a frozen graph and load it into Netron, at first it looks correct, but I don’t think that it is:

[screenshot: Netron graph of the frozen graph]

There are four reasons I do not think this is correct:

  1. It is very different from the graph when it was uploaded as an h5 (which looks correct to me).
  2. As you can see from earlier, I am using 1D convolutions throughout, yet this shows the graph going to 2D (and remaining that way).
  3. This file is 128 MB, whereas the one in the TensorFlow-to-ML.NET example is only 252 KB. Even the Inception model is only 56 MB.
  4. If I load the Inception model in TensorFlow and save it as an h5, it looks the same as in the ML.NET resource, yet when I save it as a frozen graph it looks different. If I take the same model and save it in the recommended SavedModel format, it shows up all messed up in Netron. Take any model you want, save it in the recommended SavedModel format, and you will see for yourself (I've tried it on a lot of different models).

Additionally, looking at the model.summary() of Inception alongside its graph, the two match in the same way my model.summary() matches the h5 graph. It seems like there should be an easier (and correct) way to save a TensorFlow model so it can be used in ML.NET.
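For reference, the usual TensorFlow 2.x recipe for producing a single frozen .pb, and printing the input/output tensor names that ML.NET needs, looks like the sketch below. This is an assumption-laden sketch (it assumes TensorFlow 2.x and the h5 file named above), not a verified fix for this model:

```python
# Sketch (assumes TensorFlow 2.x and the h5 file from the question):
# freeze a Keras model into a single .pb and print the input/output
# tensor names needed by ML.NET.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.models.load_model("Mn_pb_model.h5")

# Wrap the model in a concrete function and fold the variables into constants.
full_model = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(full_model)

print("inputs: ", [t.name for t in frozen_func.inputs])
print("outputs:", [t.name for t in frozen_func.outputs])

# Write the frozen GraphDef, which can then be inspected in Netron
# and referenced by name from ML.NET.
tf.io.write_graph(frozen_func.graph, ".", "frozen_model.pb", as_text=False)
```

The printed tensor names (for example something like `x:0` and `Identity:0`) are what would go into the ML.NET pipeline's input/output column configuration.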


Get this bounty!!!

#StackBounty: #c# #.net #sharepoint #microsoft-graph-api SharePoint file size is different than source file after uploading

Bounty: 50

We are developing a WinForms (desktop) application in .NET Framework 4.5.2 with C#.

We use the Microsoft.Graph library 1.21.0 and Microsoft.Graph.Core 1.19.0 to copy files from a local Windows machine to SharePoint/OneDrive cloud storage.

I tried Microsoft.Graph 3.18.0 with Microsoft.Graph.Core 1.22.0 on .NET Framework 4.6.2, but the issue is the same.

  1. We copy files smaller than 4 MB using the following method:

uploadedItem = await MainDrive.Root.ItemWithPath(Uri.EscapeDataString(targetFilePath))
    .Content.Request().PutAsync(stream, cancellationToken);

  2. Files larger than 4 MB are copied using chunked upload:

var session = await MainDrive.Root.ItemWithPath(targetFilePath).CreateUploadSession().Request().PostAsync(cancellationToken);
var provider = new ChunkedUploadProvider(session, graphClient, stream, OneDriveChunkSize);

var chunkRequests = provider.GetUploadChunkRequests();  

var trackedExceptions = new List<Exception>();

foreach (UploadChunkRequest request in chunkRequests)
{
   await CheckforBandwidthThrotelling(fileInfo.Name, fp, cancellationToken);
   UploadChunkResult result = await provider.GetChunkRequestResponseAsync(request, trackedExceptions);
   if (result.UploadSucceeded)
   {
    uploadedItem = result.ItemResponse;
   }
}

Issue: after copying files to SharePoint, the destination file size is larger than the source. The same APIs and methods work correctly against OneDrive personal.
I found that this is because metadata gets added to the file. We are not maintaining multiple versions of files on SharePoint.
The issue occurs mostly with Office files (doc, xlsx, and ppt) but not with txt files of any size.

The application detects mismatched files between source and destination based on timestamp and file size. Since the file has a different size on the next run, it copies the file again.

The same issue is reported on GitHub.

Some more description about the issue.

I am looking for a workaround: some way to compare files between source and destination to decide whether a file needs to be copied again.
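One possible workaround (a sketch, not tested against this exact SDK version): compare content hashes instead of sizes. SharePoint returns a QuickXorHash for drive items, and Microsoft documents the QuickXorHash algorithm so it can also be computed locally. `ComputeQuickXorHash` below is a hypothetical helper you would implement from that documented algorithm:

```csharp
// Sketch: decide whether to re-upload by comparing SharePoint's QuickXorHash
// with one computed locally, instead of comparing file sizes.
// ComputeQuickXorHash is a placeholder for an implementation of Microsoft's
// documented QuickXorHash algorithm.
DriveItem item = await MainDrive.Root.ItemWithPath(targetFilePath)
    .Request()
    .GetAsync(cancellationToken);

string remoteHash = item.File?.Hashes?.QuickXorHash;
string localHash = ComputeQuickXorHash(localFilePath); // hypothetical helper

bool needsCopy = remoteHash == null ||
                 !string.Equals(remoteHash, localHash, StringComparison.Ordinal);
```

Hash comparison sidesteps the size inflation entirely, since the hash SharePoint reports is over the stored content rather than the byte count.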



#StackBounty: #c# #.net #serialization #json.net #nullreferenceexception Debug JsonConvert.SerializeObject and object reference not set…

Bounty: 50

I can't seem to figure out why this method "sometimes" causes a null reference exception. The code runs fine on my localhost even while the error is appearing for some users on the server, and when I restart the site it also goes away.

My question is: how can I debug this? The method takes a .NET object and tries to serialize it, but I can't tell which property is causing the issue.

Method

public static string ToJSON(this object o)
{
    return JsonConvert.SerializeObject(o, new JsonSerializerSettings
    {
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore
    });
}

Exception

   at System.Text.StringBuilder.Append(Char value)
   at Newtonsoft.Json.JsonTextWriter.WriteEnd(JsonToken token)
   at Newtonsoft.Json.JsonWriter.AutoCompleteClose(JsonContainerType type)
   at Newtonsoft.Json.JsonWriter.WriteEndObject()
   at Newtonsoft.Json.JsonWriter.WriteEnd(JsonContainerType type)
   at Newtonsoft.Json.JsonWriter.AutoCompleteAll()
   at Newtonsoft.Json.JsonTextWriter.Close()
   at Newtonsoft.Json.JsonWriter.Dispose(Boolean disposing)
   at Newtonsoft.Json.JsonWriter.System.IDisposable.Dispose()
   at Newtonsoft.Json.JsonConvert.SerializeObjectInternal(Object value, Type type, JsonSerializer jsonSerializer)
   at Newtonsoft.Json.JsonConvert.SerializeObject(Object value, JsonSerializerSettings settings)
   at Tournaments.Common.Extensions.ObjectExtensions.ToJSON(Object o)

Try/Catch

I tried the code below, and the JavaScriptSerializer throws the error "A circular reference was detected while serializing an object of type 'Tournaments.Models.Events.EventModel'". However, I don't know which object holds this EventModel; it could be any of several.

<script type="text/javascript">
    @{
        string model = null;
        try
        {
            model = Model.Designer.ToJSON();
        }
        catch (Exception ex)
        {
            model = new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(Model.Designer);
        }
    }
    app.viewModel.members.bracket.init(@Html.Raw(model));
</script>
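One way to pinpoint the offending property (a sketch; the `Debug.WriteLine` call and the example path in the comment are illustrative, not from the original code) is Json.NET's `Error` callback, which reports the JSON path of the member that failed:

```csharp
using System.Diagnostics;
using Newtonsoft.Json;

// Sketch: use Json.NET's Error callback to log the JSON path of the member
// that fails, instead of losing that information in the final exception.
public static string ToJSONWithDiagnostics(this object o)
{
    return JsonConvert.SerializeObject(o, new JsonSerializerSettings
    {
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
        Error = (sender, args) =>
        {
            // args.ErrorContext.Path is the path of the property being written,
            // e.g. something like "Rounds[3].Event" (hypothetical example).
            Debug.WriteLine(
                $"Serialization error at '{args.ErrorContext.Path}': {args.ErrorContext.Error.Message}");
            args.ErrorContext.Handled = true; // keep going to surface every failure
        }
    });
}
```

Swapping this in temporarily for ToJSON should show exactly which property graph triggers the failure, which is usually enough to find the circular reference.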



#StackBounty: #c# #.net #asp.net-core #rtl-sdr #iq Store data in (.wav) format from RTL-SDR device using c#

Bounty: 50

I am able to connect to an RTL-SDR using librtlsdr.dll and libusb-1.0.dll via the rtlsdr-manager wrapper (https://github.com/nandortoth/rtlsdr-manager) in C# on .NET Core 3.0.

After setting the device frequency I started receiving sample data from the device; it arrives as a List of IQ samples.

I need to store this data in a .wav file. This is very easy with the chrome.usb.bulkTransfer function, which provides Int8Array/Int32Array/Uint8Array buffers that can be written directly to a .wav file.

I don't have an idea how this could be done in C# from the IQ array.

Any suggestions or code samples would be appreciated.
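A minimal sketch of writing IQ samples as a stereo 16-bit PCM WAV in C#. It assumes each sample exposes I and Q as floats in [-1, 1] (the `(float I, float Q)` tuple is a stand-in; adjust the scaling to whatever sample type rtlsdr-manager actually returns):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Sketch: write interleaved I/Q samples to a 2-channel 16-bit PCM WAV file,
// the layout many SDR tools expect (channel 0 = I, channel 1 = Q).
static void WriteIqWav(string path, IReadOnlyList<(float I, float Q)> samples, int sampleRate)
{
    const short bitsPerSample = 16;
    const short channels = 2;
    int blockAlign = channels * bitsPerSample / 8;   // bytes per sample frame
    int dataSize = samples.Count * blockAlign;

    using (var writer = new BinaryWriter(File.Create(path)))
    {
        // RIFF header
        writer.Write("RIFF".ToCharArray());
        writer.Write(36 + dataSize);                 // remaining chunk size
        writer.Write("WAVE".ToCharArray());
        // fmt chunk
        writer.Write("fmt ".ToCharArray());
        writer.Write(16);                            // fmt chunk size
        writer.Write((short)1);                      // audio format: PCM
        writer.Write(channels);
        writer.Write(sampleRate);
        writer.Write(sampleRate * blockAlign);       // byte rate
        writer.Write((short)blockAlign);
        writer.Write(bitsPerSample);
        // data chunk
        writer.Write("data".ToCharArray());
        writer.Write(dataSize);

        foreach (var (i, q) in samples)
        {
            // Clamp and scale [-1, 1] floats to 16-bit signed integers.
            writer.Write((short)(Math.Max(-1f, Math.Min(1f, i)) * short.MaxValue));
            writer.Write((short)(Math.Max(-1f, Math.Min(1f, q)) * short.MaxValue));
        }
    }
}
```

This mirrors what the typed arrays do in the Chrome API: the header describes the layout, and the samples are just written as raw little-endian integers after it.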



#StackBounty: #.net #docker #configuration #traefik #swarm Traefik won't route path

Bounty: 50

To test Traefik I made an app like the whoami service shown in Traefik's getting started documentation; it responds with a friendly message to a GET request on the '/' and '/sub' endpoints. I've verified that changing whoami to answer on different paths does work, but for some reason Traefik won't resolve my app, even though I've configured it the same way as whoami.

I know the first thing that comes to mind is that, since my app is configured the same way as whoami, the problem must be my app. But curl confirms that I can reach my app from Traefik's container, so I started wondering whether there is something baked into Traefik that makes the whoami app work where mine doesn't. I know that's a silly assumption, but I don't see what else my app needs to do besides respond to an HTTP GET request. You can see the app and how I'm bringing everything up here; you just need to see build.sh.

Another problem, which I did work around, is configuring Traefik in swarm mode: I had to create a Traefik image instead of passing the configuration as an argument. The main configuration is shown in the code below; traefik_rp is just a Traefik image with a TOML file that sets swarmMode.

version: '3'

services:
  traefik:
    # The official v2 Traefik docker image
    image: traefik_rp
    # Enables the web UI and tells Traefik to listen to docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
      
  simple_app:
    image: simpleapp
    environment: 
      ASPNETCORE_ENVIRONMENT: Release
    labels:
      - "traefik.http.routers.simple_app_service.rule=Path(`/simpleapp`)"

  whoami:
    # A container that exposes an API to show its IP address
    image: containous/whoami
    labels:
      - "traefik.http.routers.whoami.rule=Path(`/`)"
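One thing worth checking (an assumption based on how Traefik v2's Docker provider behaves, not something verified against this exact setup): when Traefik runs with swarmMode enabled it reads labels from the service's deploy section, and in swarm mode it cannot auto-detect the container's port, so the router's backing service usually needs an explicit port label. A sketch, assuming the app listens on port 80 inside the container:

```yaml
  simple_app:
    image: simpleapp
    environment:
      ASPNETCORE_ENVIRONMENT: Release
    deploy:
      labels:
        - "traefik.http.routers.simple_app_service.rule=Path(`/simpleapp`)"
        # In swarm mode Traefik cannot detect the container port itself;
        # 80 here is an assumption about what the app listens on.
        - "traefik.http.services.simple_app_service.loadbalancer.server.port=80"
```

If whoami works without these, that may simply be because it is deployed outside swarm mode or listens on a port Traefik can infer, not because anything is baked in for it.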

Best regards.



#StackBounty: #.net #dispatcher #bitmapencoder Cannot create more Dispatcher. Runs out of resource?

Bounty: 50

In our application we use PngBitmapEncoder to encode and save PNG images in a separate thread task. After a few days of running the application, the encoder can no longer create a Dispatcher and throws this error:

Not enough storage is available to process the command

And has the below call stack

System.ComponentModel.Win32Exception (0x80004005): Not enough storage is available to process this command
   at MS.Win32.HwndWrapper..ctor(Int32 classStyle, Int32 style, Int32 exStyle, Int32 x, Int32 y, Int32 width, Int32 height, String name, IntPtr parent, HwndWrapperHook[] hooks)
   at System.Windows.Threading.Dispatcher..ctor()
   at System.Windows.Threading.DispatcherObject..ctor()
   at System.Windows.Media.Imaging.BitmapEncoder..ctor(Boolean isBuiltIn)

Since .NET's source is available, I got curious about which line inside the Dispatcher constructor throws the error:

    [SecurityCritical, SecurityTreatAsSafe]
    private Dispatcher()
    {
        _queue = new PriorityQueue<DispatcherOperation>();

        _tlsDispatcher = this; // use TLS for ownership only
        _dispatcherThread = Thread.CurrentThread;

        // Add ourselves to the map of dispatchers to threads.
        lock(_globalLock)
        {
            _dispatchers.Add(new WeakReference(this));
        }

        _unhandledExceptionEventArgs = new DispatcherUnhandledExceptionEventArgs(this);
        _exceptionFilterEventArgs = new DispatcherUnhandledExceptionFilterEventArgs(this);

        _defaultDispatcherSynchronizationContext = new DispatcherSynchronizationContext(this);

        // Create the message-only window we use to receive messages
        // that tell us to process the queue.
        MessageOnlyHwndWrapper window = new MessageOnlyHwndWrapper();
        _window = new SecurityCriticalData<MessageOnlyHwndWrapper>( window );

        _hook = new HwndWrapperHook(WndProcHook);
        _window.Value.AddHook(_hook);

        // DDVSO:447590
        // Verify that the accessibility switches are set prior to any major UI code running.
        AccessibilitySwitches.VerifySwitches(this);
    }

Update

I updated the constructor code above from the .NET reference source. Dispatcher.cs is available here: https://referencesource.microsoft.com/#WindowsBase/Base/System/Windows/Threading/Dispatcher.cs,078d6b27d9837a35

After some investigation we found that the issue occurs after some 15,000 iterations (each iteration creates a new thread and calls PngBitmapEncoder). We then found that this is linked to the Global Atom Table limit (0x4000, i.e. 16,384). More details on the Global Atom Table are here: https://docs.microsoft.com/en-us/archive/blogs/ntdebugging/identifying-global-atom-table-leaks

The dispatcher created each time makes an entry in the Global Atom Table, and on thread exit this entry is not cleared. This leaks Global Atom Table entries, and when the table reaches its limit the "Not enough storage…" error is thrown. This seems like an issue with Microsoft's handling of Dispatcher; even in the PngBitmapEncoder documentation I do not see any remarks about Dispatcher handling or any need to explicitly shut down the dispatcher.
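A possible workaround (a sketch; the encoding details are elided, and this is based on the reasoning above rather than a documented fix): since BitmapEncoder derives from DispatcherObject, each worker thread implicitly creates a Dispatcher, and shutting that dispatcher down before the thread exits should tear down its message-only window, releasing the associated atom:

```csharp
using System.Threading;
using System.Windows.Media.Imaging;
using System.Windows.Threading;

// Sketch: shut down the thread's implicitly created Dispatcher before the
// thread exits, so its message-only window (and the atom it registered) is
// released instead of leaking into the Global Atom Table.
var thread = new Thread(() =>
{
    try
    {
        var encoder = new PngBitmapEncoder();
        // ... add frames and save the PNG as before ...
    }
    finally
    {
        // Tear down the Dispatcher that BitmapEncoder's DispatcherObject
        // base class created for this thread.
        Dispatcher.CurrentDispatcher.InvokeShutdown();
    }
});
thread.Start();
```

If this holds, the atom count should stay flat across iterations instead of climbing toward the 0x4000 limit.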



#StackBounty: #c# #.net #web-crawler #sql-server-express #pc What causes the slow down in crawling from a laptop program?

Bounty: 50

I have a project that I managed to rescue from a server that was outsourced, and I got most of it working on one of my laptops at home. It runs Windows 8.1, VS 2017, and SQL Server Express 2017, and the DLL I use in my app is written in C# against .NET 4.6.1.

Every night at midnight I manually run some stored procs that fill stat tables (SQL Server Express has no SQL Server Agent), then an index-maintenance proc that either rebuilds or defragments the indexes and rebuilds statistics, before I manually restart the BOT from a command prompt just after midnight.

However, I have noticed that if I leave the laptop on for 3-5 days, each run (which fetches on average 40 races and 5-20 runners per race through proxies) gets slower and slower. I just rebooted, as last night it took from 1 am to 11 am to crawl, scan with regex to extract info, and save the races and runners to the DB.

However, if I look at the CreateDate times I store on every new race, I can see a pattern:

Yesterday: 10 hours for 40 races and runners
Saturday: 4 hours for 50 races and runners
Friday: 3 hours for 49 races
Thursday: 5 hours for 42 races
Wednesday: 5 hours for 32 races
Tuesday: 1 hour for 36 races

Obviously, over time more and more races and runners are stored in the DB, so retrieval from indexes, storage, etc. takes longer, but after a reboot it is quick. I just restarted it tonight, rebuilt the indexes, then let it go, and it has already done 7 races in 7 minutes.

Obviously I haven't got a server to put this on; the last attempt ended with an old boss putting it on a French server that doesn't allow access to online betting sites, and my BOT uses the Betfair API.

It runs on my laptop OK apart from:

- The time to get all races and runners into the DB lengthens the longer I leave the machine on, despite all the clean-up operations I run nightly (delete old log messages and locks, rebuild stat tables, then the reindex/defrag job).

- For some reason, the logfile I write debug messages to for after-the-fact debugging (e.g. I look for SQL errors, connection errors, proxy issues, regex errors), output through the console app I currently host the DLL in to a logfile at C:\programdata\myproj\logfile.txt, behaves oddly. It has the right permissions and is being written to; however, once the job is over, if I try to open it in my standard editor (EditPlus) it just opens as a blank document. If I open it in Notepad first I can see all the debug output, and I can then copy and paste it into a blank EditPlus document.

It never did this before on my work PC. Permissions are OK, the file is being written to, and I don't get any "permission denied" or other I/O errors when opening the logfile; it's just empty unless I open it in Notepad first.

So I'd like to know what sort of things happen to slow this job down over time that a reboot fixes. I know the old saying we used to get from our techies whenever we had a bug or issue with our PCs at work, "have you tried turning it off and on again", which does for some reason fix so many issues.

I'd just like to know what sort of issues could slow it down over days, so that I could maybe automate a clean-up job to stop it happening. I used to run the exact same code on my work PC, connected remotely to a server, every day for months, only rebooting when forced to by Windows Updates. So it never used to do this, despite my bad practice at work of leaving my PC on all the time.

Is the disk getting fragmented? And if so, why wouldn't it need a disk defrag, rather than just a reboot, to fix it?
The registry? What could get worse and worse over time that a reboot fixes?
Or is it that I am using SQL Server Express 2017 and there is some I/O issue with the files it writes that slows down over time?

I would just like to be able to leave my laptop on with this BOT running at specific times during the day and not worry about it taking 11 hours to complete the first import job.

It is now 37 minutes past the hour; the BOT has been running for 20 minutes and has imported 15 races and runners, about a quarter of the total, so it should finish in about an hour tonight. All I did was restart my laptop, nothing else, and that has sped it up from 10 hours yesterday night.

What could be slowing it down over time, and can I fix it at all?



#StackBounty: #c# #.net #http #rest Inconsistent behavior of C# code

Bounty: 50

I am creating a Windows service. It sometimes works when I first start it, but after a few executions my code throws exceptions like:

Server unavailable 503

So many requests 429

My code looks correct to me, but in a few cases it does not work. Could you please check it?

 private void timer1_Tick(object sender, ElapsedEventArgs e)
 {
    string jsonString = "";
    string jsonstring2 = "";
    string prodfetchurl = HOST;
    string prodfetchurl1 = "testURL";

    var req = WebRequest.Create(prodfetchurl) as HttpWebRequest;

    req.Method = "GET";
    req.KeepAlive = true;
    InitializeRequest(req);
    //req.Proxy = null;
    req.Accept = MIME_TYPE;
    System.Threading.Thread.Sleep(200000);
    var response = (HttpWebResponse)req.GetResponse();
    WriteToFile("First service called...");
    if (response.StatusCode == HttpStatusCode.OK)
    {
        Stream responseStream = response.GetResponseStream();
        StreamReader responseReader = new StreamReader(responseStream);
        jsonString = responseReader.ReadToEnd();
    }

    var deserialsseobj = JsonConvert.DeserializeObject<ProductList>(jsonString).Products.Where(i => i.Failed > 0).ToList();
    foreach (var a in deserialsseobj)
    {
        var pid = a.ID;
        string url = FailedDevicesUrl + pid.Value + "/failed";
        var req2 = WebRequest.Create(url) as HttpWebRequest;
        req2.Method = "GET";
        req2.KeepAlive = true;
        InitializeRequest(req2);
        //req2.Proxy = null;
        req2.Timeout = 300000;
        req2.Accept = MIME_TYPE;     
        System.Threading.Thread.Sleep(200000);

        var response1 = (HttpWebResponse)req2.GetResponse();
        Stream responsestream2 = response1.GetResponseStream();
        WriteToFile("Second service called...");
        if (response1.StatusCode == HttpStatusCode.OK)
        {
            StreamReader responsereader1 = new StreamReader(responsestream2);
            jsonstring2 = responsereader1.ReadToEnd();
        }
        var output = JsonConvert.DeserializeObject<List<FailedDeviceList>>(jsonstring2);  // Will get List of the Failed devices
        AutoReprocess(pid.Value, output);
        List<int> deviceids = new List<int>();
        Reprocessdata reproc = new Reprocessdata();
        Reprocessdata.DeviceId rprod = new Reprocessdata.DeviceId();

        reproc.ForceFlag = true;
        reproc.ProductID = pid.Value;
        foreach (var dd in output)
        {
            rprod.ID = dd.DeviceId;
            reproc.DeviceIds.Add(rprod);
        }
        // Reprocess the Product in Devices
        var req3 = WebRequest.Create(ReprocessUrl) as HttpWebRequest;
        req3.Method = "POST";
        InitializeRequest(req3);
        req3.Accept = MIME_TYPE;
        req3.ContentType = "application/json";
        using (StreamWriter writer = new StreamWriter(req3.GetRequestStream()))
        {
            string json = new JavaScriptSerializer().Serialize(reproc);

            writer.Write(json);
            writer.Close();
        }
        var response5 = (HttpWebResponse)req3.GetResponse();
        WriteToFile("Third service called...");
        if (response5.StatusCode == HttpStatusCode.OK)
        {
            string result;
            using (StreamReader rdr = new StreamReader(response5.GetResponseStream()))
            {
                result = rdr.ReadToEnd();
            }
        }
    }
    response.Close();
 }

Methods used in above code

public void AutoReprocess(int pid, List<FailedDeviceList> output)
{
    List<int> deviceids = new List<int>();
    Reprocessdata reproc = new Reprocessdata();
    Reprocessdata.DeviceId rprod = new Reprocessdata.DeviceId();
    reproc.ForceFlag = true;
    reproc.ProductID = pid;
    foreach (var dd in output)
    {
        rprod.ID = dd.DeviceId;
        reproc.DeviceIds.Add(rprod);
    }
    var req3 = WebRequest.Create(ReprocessUrl) as HttpWebRequest;
    req3.Method = "POST";
    req3.KeepAlive = true;
    InitializeRequest(req3);
    req3.Accept = MIME_TYPE;
    req3.Timeout = 300000;
    req3.ContentType = "application/json";
    using (StreamWriter writer = new StreamWriter(req3.GetRequestStream()))
    {
        string json = new JavaScriptSerializer().Serialize(reproc);

        writer.Write(json);
        writer.Close();
    }
    System.Threading.Thread.Sleep(100000);
    var response5 = (HttpWebResponse)req3.GetResponse();
    WriteToFile("Third service called...");
    if (response5.StatusCode == HttpStatusCode.OK)
    {
        string result;
        using (StreamReader rdr = new StreamReader(response5.GetResponseStream()))
        {
            result = rdr.ReadToEnd();
        }
    }
}

public void InitializeRequest(HttpWebRequest request)
{
    request.Headers.Add("aw-tenant-code", API_TENANT_CODE);
    request.Credentials = new NetworkCredential(USER_NAME, PASSWORD);
    request.KeepAlive = true;
    //request.AddRange(1024);
    //request.Proxy = null; 
}
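One thing that stands out in the code above, independent of the server-side errors: none of the HttpWebResponse objects or their streams are disposed, and undisposed responses pin connections in the connection pool, which can surface later as stalled or failing requests. A sketch of the first call with disposal added (same logic, just wrapped in using blocks):

```csharp
// Sketch: wrap responses and readers in using blocks so the underlying
// connection is returned to the pool when the response is consumed.
using (var response = (HttpWebResponse)req.GetResponse())
{
    WriteToFile("First service called...");
    if (response.StatusCode == HttpStatusCode.OK)
    {
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            jsonString = reader.ReadToEnd();
        }
    }
}
```

The same pattern would apply to response1 and response5 in the loop and in AutoReprocess.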



#StackBounty: #c# #.net #rest #http #windows-services Https calls are not connecting to server

Bounty: 200

I am working on a Windows service in Visual Studio 2017. The REST API calls throw exceptions while I am debugging the code. Sometimes the first two or three calls work, and after that I get exceptions:

System.Net.WebException: ‘The remote server returned an error: (503)
Server Unavailable.’

The remote server returned an error: (429)

Unable to connect to the remote server

When calling the same APIs from Postman, I get responses successfully.

This is my code

private void timer1_Tick(object sender, ElapsedEventArgs e)
{
    WriteToFile("timer1_Tick method called..");
try
{
    string jsonString = "";
    string jsonstring2 = "";
    string prodfetchurl = HOST;
    var req = WebRequest.Create(prodfetchurl) as HttpWebRequest;
    req.Method = "GET";
    InitializeRequest(req);
    req.Accept = MIME_TYPE;
    //System.Threading.Thread.Sleep(5000);
    var response = (HttpWebResponse)req.GetResponse();
    WriteToFile("First service called...");
    if (response.StatusCode == HttpStatusCode.OK)
    {
        Stream responseStream = response.GetResponseStream();
        StreamReader responseReader = new StreamReader(responseStream);
        jsonString = responseReader.ReadToEnd();
    }
    var deserialsseobj = JsonConvert.DeserializeObject<ProductList>(jsonString).Products.Where(i => i.Failed > 0).ToList();
    foreach (var a in deserialsseobj)
    {
        var pid = a.ID;
        string url = FailedDevicesUrl + pid.Value + "/failed";
        var req2 = WebRequest.Create(url) as HttpWebRequest;
        req2.Method = "GET";
        InitializeRequest(req2);

        req2.Timeout = 300000;
        req2.Accept = MIME_TYPE;
        var response1 = (HttpWebResponse)req2.GetResponse();
        Stream responsestream2 = response1.GetResponseStream();
        WriteToFile("Second service called...");
        if (response1.StatusCode == HttpStatusCode.OK)
        {
            StreamReader responsereader1 = new StreamReader(responsestream2);
            jsonstring2 = responsereader1.ReadToEnd();
        }

        var output = JsonConvert.DeserializeObject<List<FailedDeviceList>>(jsonstring2);  // Will get List of the Failed devices
        List<int> deviceids = new List<int>();
        Reprocessdata reproc = new Reprocessdata();
        Reprocessdata.DeviceId rprod = new Reprocessdata.DeviceId();

        reproc.ForceFlag = true;
        reproc.ProductID = pid.Value;
        foreach (var dd in output)
        {
            rprod.ID = dd.DeviceId;
            reproc.DeviceIds.Add(rprod);
        }

        // Reprocess the Product in Devices
        var req3 = WebRequest.Create(ReprocessUrl) as HttpWebRequest;
        req3.Method = "POST";
        InitializeRequest(req3);
        req3.Accept = MIME_TYPE;
        req3.Timeout = 300000;
        req3.ContentType = "application/json";
        using (StreamWriter writer = new StreamWriter(req3.GetRequestStream()))
        {
            string json = new JavaScriptSerializer().Serialize(reproc);

            writer.Write(json);
            writer.Close();
        }
        System.Threading.Thread.Sleep(5000);
        var response5 = (HttpWebResponse)req3.GetResponse();
        WriteToFile("Third service called...");
        if (response5.StatusCode == HttpStatusCode.OK)
        {
            string result;
            using (StreamReader rdr = new StreamReader(response5.GetResponseStream()))
            {
                result = rdr.ReadToEnd();
            }
        }
    }
    response.Close();
}
catch (Exception ex)
{
    WriteToFile("Simple Service Error on: {0} " + ex.Message + ex.StackTrace);
}
}

Methods used in above code

protected override void OnStart(string[] args)
{
    base.OnStart(args);
    timer1 = new System.Timers.Timer();
    timer1.Interval = 60000; //every 1 min
    timer1.Elapsed += new System.Timers.ElapsedEventHandler(timer1_Tick);
    timer1.Enabled = true;
    WriteToFile("Service has started..");
}

public void InitializeRequest(HttpWebRequest request)
{
    request.Headers.Add("aw-tenant-code", API_TENANT_CODE);
    request.Credentials = new NetworkCredential(USER_NAME, PASSWORD);
    request.KeepAlive = false;
    request.AddRange(1024);
}
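HTTP 429 and 503 are the server telling the client to slow down or that it is temporarily overloaded, so the usual client-side fix is to back off and retry rather than fail. A sketch (GetWithRetry is a hypothetical helper; maxRetries and the fallback delay are arbitrary choices, and it takes a request factory because an HttpWebRequest cannot be re-sent once GetResponse has been called):

```csharp
using System;
using System.Net;
using System.Threading;

// Sketch: retry transient 429/503 responses, honoring the Retry-After
// header when the server sends one, otherwise backing off exponentially.
static HttpWebResponse GetWithRetry(Func<HttpWebRequest> createRequest, int maxRetries = 5)
{
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return (HttpWebResponse)createRequest().GetResponse();
        }
        catch (WebException ex) when (ex.Response is HttpWebResponse resp
            && ((int)resp.StatusCode == 429 || (int)resp.StatusCode == 503)
            && attempt < maxRetries)
        {
            string retryAfter = resp.Headers["Retry-After"];
            int delaySeconds = int.TryParse(retryAfter, out int s) ? s : (int)Math.Pow(2, attempt);
            resp.Close();
            Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
        }
    }
}
```

Each of the three calls in timer1_Tick could then go through this helper, e.g. `GetWithRetry(() => { var r = (HttpWebRequest)WebRequest.Create(prodfetchurl); r.Method = "GET"; InitializeRequest(r); r.Accept = MIME_TYPE; return r; })`. Postman succeeding while the service fails fits this picture: a timer firing every minute and looping over products generates far more requests than manual Postman calls, which is exactly when rate limiting kicks in.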

When I contacted the service provider, they said everything was fine on their side. Is my code buggy, or is the Windows service unreliable? How can I fix this issue?

Note: all the APIs work fine from an Angular application using Visual Studio Code, which suggests the problem is in my code.

