#StackBounty: #php #laravel #backpack-for-laravel addClause always returns empty collection

Bounty: 500

I have an Article model and am using $this->crud->addClause('where','user_id',backpack_user()->id); to limit the articles shown to those belonging to the currently logged in user.

I now want to further filter my articles by only showing articles that have a certain page_id – in fact, I can do this without any hassle:

$this->crud->addClause('where','page_id', "=", "154");

Now only articles that have a page_id of 154 are shown.

Obviously I want a dynamic variable for page_id, so my URL looks like this:

http://example.com/admin/articles/154

and I extract the last parameter like this:

$segments = request()->segments();
$page_id = $segments[count($segments)-1]; //get last segment, contains "154"

Now my problem is that for some obscure reason, whenever I try to use

$this->crud->addClause('where','page_id', "=", $page_id);

The collection is empty, even though dd()'ing $page_id shows it is set and contains "154". I have even tried casting $page_id to int like this, and in fact it becomes an int, but the problem remains (empty collection):

$page_id = (int)$segments[count($segments)-1]; //shows 154


Edit: Here are the outputs of dd($this->crud->query->toSql());:

//limit access to user owned models
$this->crud->addClause('where','user_id',backpack_user()->id);    
//produces "select * from `articles` where `user_id` = ?"
dd($this->crud->query->toSql());

//limit entries to those with same page_id as in URL
$page_id = Route::current()->parameter('page_id');
$this->crud->addClause('where','page_id',$page_id);       
//produces "select * from `articles` where `user_id` = ? and `page_id` = ?"
dd($this->crud->query->toSql());
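To narrow this down, it may help to dump the bound values alongside the SQL (a debugging sketch, not from the original post), since toSql() shows only placeholders:

```php
// Inspect both the generated SQL and the values that will be bound
// for the ? placeholders, including their types.
dd(
    $this->crud->query->toSql(),
    $this->crud->query->getBindings()
);
```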

Thanks for any help!


Get this bounty!!!

#StackBounty: #javascript #ios #ajax #laravel #axios Axios POST fails on iOS

Bounty: 50

I am trying to do a simple ajax POST from domain1 to domain2 using Axios.
This is a cross domain simple POST so there is no PREFLIGHT (OPTIONS) call.
The response from the application is a simple JSON string.

On Chrome, on Android, Windows and iOS (excluding iPhone) this works fine.
But on iPhone 6, 7, and 8+, in both Safari and Chrome, I get an error in the console from the axios response. I can see the POST request reach the application on domain2, and a JSON response is sent. But this is what is shown when I console.log the response in the axios catch. There are no other details.

Error: Network Error

My POST is a multipart/form-data post with the following Request headers:

Accept: application/json, text/plain, */*
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary81kouhSK7WgyVQZ3
Origin: https://domain1
Referer: https://domain1/test
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1

And the form data is simply 4 text fields

------WebKitFormBoundary81kouhSK7WgyVQZ3
Content-Disposition: form-data; name="a"
12345
------WebKitFormBoundary81kouhSK7WgyVQZ3
Content-Disposition: form-data; name="b"
asdfasf
------WebKitFormBoundary81kouhSK7WgyVQZ3
Content-Disposition: form-data; name="c"
asdfadsf
------WebKitFormBoundary81kouhSK7WgyVQZ3
Content-Disposition: form-data; name="d"
adfasdfa
------WebKitFormBoundary81kouhSK7WgyVQZ3--

When the POST is sent from Chrome, (or IE and Firefox) on Windows and Mac I get the following response headers and a HTTP 200:

access-control-allow-headers: Accept,Content-Type,Origin,Referer,User-Agent
access-control-allow-methods: GET, POST, PUT, DELETE, OPTIONS
access-control-allow-origin: *
cache-control: no-cache, private
content-type: application/json, text/plain, */*; charset=UTF-8
x-content-type-options: nosniff
x-ratelimit-limit: 60
x-ratelimit-remaining: 59
x-xss-protection: 1

which I have explicitly set on the application of domain2 (Laravel 5.8 application; CORS headers set in middleware).
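For reference, a middleware of the kind described might look like the sketch below (the class name is hypothetical; the header values mirror the response dump above):

```php
<?php

namespace App\Http\Middleware;

use Closure;

// Hypothetical sketch of a CORS middleware like the one described above.
// The header values mirror those shown in the response dump.
class Cors
{
    public function handle($request, Closure $next)
    {
        return $next($request)
            ->header('Access-Control-Allow-Origin', '*')
            ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS')
            ->header('Access-Control-Allow-Headers', 'Accept,Content-Type,Origin,Referer,User-Agent');
    }
}
```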

But on iPhone, in both Safari and Chrome (and in Safari on a Mac; Chrome works on Mac), I do not see any response. The console.log(error) shows (see axios code below):

Error: Network Error

And in the network tab looking at the request/response there are no response headers returned and no HTTP status code. Only the request headers are shown.

My axios code is the following:

axios.post('https://domain2/test', formData)           
    .then(function (response) {

        console.log("POST function of axios 1");
        console.log(response);
        console.log("POST function of axios 2");
    })
    .catch(function (error) {
        console.log("Error in catch of axios post");
        console.log(error);
        console.log("End error");
    });

The formData is created using formData.append('a', 12345) etc.

I can successfully POST a test upload from https://domain1 to https://domain1 using the same axios code, so I believe there is some issue with the response headers from domain2 that iOS does not like, killing the response.

I've tried setting/changing all response headers, setting headers on the Axios POST, tried using plain xhr instead of Axios, etc., but to no avail: the same error.

Does anyone have any pointers? I've googled but have not found anything that helps.
Even a way to get more information from the error response on the iPhone would help.
I am debugging the iPhone on a Mac, so I can see the console.log output.

Many thanks



#StackBounty: #php #laravel #selenium #behat #mink Use Laravel Eloquent from within Behat/Mink FeatureContext

Bounty: 50

This question assumes some knowledge of Laravel, Behat, and Mink.

With that in mind, I am having trouble making a simple call to the DB from within my Behat FeatureContext file which looks somewhat like this…

<?php

use App\Models\Db\User;
use Behat\Behat\Context\Context;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
use Behat\MinkExtension\Context\MinkContext;

/**
 * Defines application features from the specific context.
 */
class FeatureContext extends MinkContext implements Context {
    public function __construct() {}

    /**
     * @Given I am authenticated with :email and :password
     */
    public function iAmAuthenticatedWith($email, $password) {
        User::where('email', $email)->firstOrFail();

        $this->visitPath('/login');
        $this->fillField('email', $email);
        $this->fillField('password', $password);
        $this->pressButton('Login');
    }
}

When this scenario runs I get this error…

Fatal error: Call to a member function connection() on null (Behat\Testwork\Call\Exception\FatalThrowableError)

Which is caused by this line…

User::where('email', $email)->firstOrFail();

How do I use Laravel Eloquent (make DB calls) from within a Behat/Mink FeatureContext? Do I need to expose something within the constructor of my FeatureContext? Update or add a line within the composer.json or behat.yml file?

If there is more than one way to solve this problem and it is worth mentioning, please do.
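One common approach (a sketch based on an assumption about the project layout, not taken from the question) is to boot the Laravel application inside the FeatureContext constructor, so Eloquent gets a configured database connection:

```php
use Illuminate\Contracts\Console\Kernel;

public function __construct()
{
    // Boot the Laravel application manually so that Eloquent models
    // have a database connection available. The relative path to
    // bootstrap/app.php is an assumption and depends on where the
    // FeatureContext file lives.
    $app = require __DIR__.'/../../bootstrap/app.php';
    $app->make(Kernel::class)->bootstrap();
}
```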

Additional Details

  • Laravel: 5.5.*

  • Behat: ^3.3

  • Mink Extension: ^2.2

  • Mink Selenium 2 Driver: ^1.3

Behat Config

default:
  extensions:
    Behat\MinkExtension\ServiceContainer\MinkExtension:
      base_url: "" #omitted 
      default_session: selenium2
      selenium2:
        browser: chrome



#StackBounty: #laravel #homebrew #dnsmasq #laravel-valet #php-7.3 Valet installed for laravel, but why isn't dnsmasq resolving corr…

Bounty: 200

I've installed valet for laravel using homebrew on my mac (Mojave). According to laravel's documentation I should now be able to ping *.test, but I keep getting the following error:

ping: cannot resolve foobar.test: Unknown host

It looks like an issue with dnsmasq. I’ve followed all the suggestions here, but nothing seems to help.

# Content of '/Users/<username>/.config/valet/dnsmasq.conf'

address=/.test/127.0.0.1
listen-address=127.0.0.1

I can see that the resolver for .test seems to be set up ok. Below is the output from scutil --dns

DNS configuration

resolver #1
  search domain[0] : default
  nameserver[0] : 192.168.1.1
  if_index : 6 (en0)
  flags    : Request A records
  reach    : 0x00020002 (Reachable,Directly Reachable Address)

resolver #2
  domain   : local
  options  : mdns
  timeout  : 5
  flags    : Request A records
  reach    : 0x00000000 (Not Reachable)
  order    : 300000

resolver #3
  domain   : 254.169.in-addr.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records
  reach    : 0x00000000 (Not Reachable)
  order    : 300200

resolver #4
  domain   : 8.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records
  reach    : 0x00000000 (Not Reachable)
  order    : 300400

resolver #5
  domain   : 9.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records
  reach    : 0x00000000 (Not Reachable)
  order    : 300600

resolver #6
  domain   : a.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records
  reach    : 0x00000000 (Not Reachable)
  order    : 300800

resolver #7
  domain   : b.e.f.ip6.arpa
  options  : mdns
  timeout  : 5
  flags    : Request A records
  reach    : 0x00000000 (Not Reachable)
  order    : 301000

resolver #8
  domain   : test
  nameserver[0] : 127.0.0.1
  flags    : Request A records, Request AAAA records
  reach    : 0x00030002 (Reachable,Local Address,Directly Reachable Address)

DNS configuration (for scoped queries)

resolver #1
  search domain[0] : default
  nameserver[0] : 192.168.1.1
  if_index : 6 (en0)
  flags    : Scoped, Request A records
  reach    : 0x00020002 (Reachable,Directly Reachable Address)

I can also see that dnsmasq seems to be running ok. Here’s the output from brew services list:

dnsmasq started root /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
httpd   started root /Library/LaunchDaemons/homebrew.mxcl.httpd.plist
mysql   started root /Library/LaunchDaemons/homebrew.mxcl.mysql.plist
nginx   started root /Library/LaunchDaemons/homebrew.mxcl.nginx.plist
php     started root /Library/LaunchDaemons/homebrew.mxcl.php.plist
php@7.1 started root /Library/LaunchDaemons/homebrew.mxcl.php@7.1.plist
php@7.2 started root /Library/LaunchDaemons/homebrew.mxcl.php@7.2.plist

Other things I’ve tried:

  • Disabling my firewall in case that was blocking the request.
  • Restarting dnsmasq (multiple times) using: sudo brew services
    restart dnsmasq
  • Reinstalling valet using valet install
  • Checking that there are no conflicting paths in /etc/hosts

Anyone got any other suggestions?

EDIT: Output of sudo brew services restart --verbose dnsmasq

Stopping `dnsmasq`... (might take a while)
==> Successfully stopped `dnsmasq` (label: homebrew.mxcl.dnsmasq)
==> Generated plist for dnsmasq:
   <?xml version="1.0" encoding="UTF-8"?>
   <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
   <plist version="1.0">
     <dict>
       <key>Label</key>
       <string>homebrew.mxcl.dnsmasq</string>
       <key>ProgramArguments</key>
       <array>
         <string>/usr/local/opt/dnsmasq/sbin/dnsmasq</string>
         <string>--keep-in-foreground</string>
         <string>-C</string>
         <string>/usr/local/etc/dnsmasq.conf</string>
       </array>
       <key>RunAtLoad</key>
       <true/>
       <key>KeepAlive</key>
       <true/>
     </dict>
   </plist>


/bin/launchctl enable system/homebrew.mxcl.dnsmasq
/bin/launchctl bootstrap system /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
==> Successfully started `dnsmasq` (label: homebrew.mxcl.dnsmasq)

EDIT 2:

I think I’m getting somewhere now. I checked in console.app for dnsmasq and I saw the error message:

failed to open pidfile /usr/local/var/run/dnsmasq/dnsmasq.pid: No such file or directory

…which led me here. It turns out I was missing the dnsmasq folder in /usr/local/var/run/, so I ran sudo mkdir dnsmasq there, and now the ping actually returns the following response:

PING foobar.test (127.0.0.1): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
...

I’m not sure what this means or whether it is now working.

When I go to foobar.test in my browser I get the message This site can’t be reached even though I have created a project with that name and linked it using valet link foobar.

EDIT 3:

I’ve got ping working properly now by turning off stealth mode (as described here) but I still get This site can’t be reached when I navigate to foobar.test in my browser.

I get the following error when I run curl foobar.test --verbose

* Rebuilt URL to: foobar.test/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to foobar.test (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: foobar.test
> User-Agent: curl/7.54.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

EDIT 4

Output of cat /usr/local/etc/dnsmasq.conf | grep -i interface:

# 10.1.2.3 to 192.168.1.1 port 55 (there must be an interface with that
# specified interfaces (and the loopback) give the name of the
# interface (eg eth0) here.
# Repeat the line for more than one interface.
#interface=
# Or you can specify which interface _not_ to listen on
#except-interface=
# If you want dnsmasq to provide only DNS service on an interface,
#no-dhcp-interface=
# even when it is listening on only some interfaces. It then discards
# working even when interfaces come and go and change address. If you
# want dnsmasq to really bind only the interfaces it is listening on,
#bind-interfaces
# that these two Ethernet interfaces will never be in use at the same
# Always give the InfiniBand interface with hardware address



#StackBounty: #laravel #api #plugins #moodle Moodle autologin plugin – how to direct user to a specific course?

Bounty: 50

I am building a laravel web application which involves the usage of Moodle Service (version 3.6). I have done autologin with a plugin.

The problem is that clicking the Take Course button on my external application will autologin to Moodle (via the plugin), but does not redirect the user to the course described in the button.

Is there a mechanism to do this?
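One possibility (an assumption; the question does not name the plugin in use): several Moodle autologin plugins, such as auth_userkey, accept a wantsurl parameter naming the page to land on after login. A hypothetical sketch of building such a URL from the Laravel side:

```php
// Hypothetical sketch: the plugin name, route, and "wantsurl" parameter
// are assumptions to verify against the plugin actually in use.
$courseId = 12;
$loginUrl = 'https://moodle.example.com/auth/userkey/login.php'
    . '?key=' . urlencode($key)
    . '&wantsurl=' . urlencode('/course/view.php?id=' . $courseId);

return redirect()->away($loginUrl);
```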



#StackBounty: #php #laravel #amazon-s3 #php-curl Laravel backup error uploading large backup to s3

Bounty: 50

I have a Laravel project that creates a new backup daily using spatie/laravel-backup and uploads it to s3. It is properly configured, and it has been working for over a year without a problem.

Suddenly, the backup can’t complete the upload process because of the following error:

Copying zip failed because: An exception occurred while uploading parts to a multipart upload. The following parts had errors:
- Part 17: Error executing "UploadPart" on "https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=17&uploadId=uploadId"; AWS HTTP error: cURL error 55: SSL_write() returned SYSCALL, errno = 104 (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)  (server): 100 Continue -
- Part 16: Error executing "UploadPart" on "https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=16&uploadId=uploadId"; AWS HTTP error: Client error: `PUT https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=16&uploadId=uploadId` resulted in a `400 Bad Request` response:
<?xml version="1.0" encoding="UTF-8"?>
<Code>RequestTimeout</Code><Message>Your socket connection to the server w (truncated...)
 RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. - <?xml version="1.0" encoding="UTF-8"?>
<Code>RequestTimeout</Code>
<Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message>
<RequestId>RequestId..</RequestId>
<HostId>Host id..</HostId>

I tried running:

php artisan backup:run --only-db // 110MB zip file
php artisan backup:run --only-files // 34MB zip file

And they both work properly. My guess is that the error is caused by the full zip size (around 145MB), which would explain why it never occurred before (when the backup size was smaller). The laravel-backup package has a related issue, but I don't think it is a problem with the library, which just uses the underlying s3 flysystem interface to upload the zip.

Is there some param I should set in php.ini (e.g. to increase the cURL upload limits), or a way to split the file into multiple chunks?
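One avenue worth trying (a sketch, assuming the timeout is indeed the culprit): Laravel forwards the s3 disk configuration to the underlying Aws\S3\S3Client, so the cURL-level timeouts can be raised in config/filesystems.php:

```php
// config/filesystems.php (sketch; the timeout values are illustrative)
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    // Passed through to Aws\S3\S3Client: raise the per-request
    // timeouts so large multipart parts are not cut off mid-upload.
    'http' => [
        'timeout'         => 300,
        'connect_timeout' => 10,
    ],
],
```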



#StackBounty: #php #laravel #eloquent PHP Laravel – Improving and refactoring code to Reduce Queries

Bounty: 50

Improve Request to Reduce Queries

I have a web application where users can upload Documents or Emails to what I call a Stream. The users can then define document fields and email fields on the stream, which each document/email will inherit. The users can furthermore apply parsing rules to these fields, by which each document/email will be parsed.

Now let's take the example that a user uploads a new document. (I have hardcoded the IDs for simplicity.)

$stream = Stream::find(1);
$document = Document::find(2);

$parsing = new ApplyParsingRules;
$document->storeContent($parsing->parse($stream, $document));

Below is the function that parses the document according to the parsing rules:

    public function parse(Stream $stream, DataTypeInterface $data) : array
    {
        //Get the rules.
        $rules = $data->rules();

        $result = [];
        foreach ($rules as $rule) {

            $result[] = [
                'field_rule_id' => $rule->id,
                'content' => 'something something',
                'typeable_id' => $data->id,
            ];
        }

        return $result;
    }

So the above basically just returns an array of the parsed text.

Now, as you can probably see, I use an interface, DataTypeInterface. This is because the parse function can accept both Documents and Emails.

To get the rules, I use this code:

//Get the rules.
$rules = $data->rules();

The method looks like this:

class Document extends Model implements DataTypeInterface
{
    public function stream()
    {
        return $this->belongsTo(Stream::class);
    }
    public function rules() : object
    {
        return FieldRule::where([
            ['stream_id', '=', $this->stream->id],
            ['fieldable_type', '=', 'App\DocumentField'],
        ])->get();
    }
}

This will query the database for all the rules that are associated with Document Fields on the specific Stream.

Last, in my first request, I had this:

$document->storeContent($parsing->parse($stream, $document));

The storeContent method looks like this:

class Document extends Model implements DataTypeInterface
{
    // A document will have many field rule results.
    public function results()
    {
        return $this->morphMany(FieldRuleResult::class, 'typeable');
    }
    // Persist the parsed content to the database.
    public function storeContent(array $parsed) : object
    {
        foreach ($parsed as $parse) {
            $this->results()->updateOrCreate(
                [
                    'field_rule_id' => $parse['field_rule_id'],
                    'typeable_id' => $parse['typeable_id'],
                ],
                $parse
            );
        }
        return $this;
    }
}

As you can probably imagine, every time a document gets parsed, it is parsed by some specific rules. These rules all generate results, so I save each result in the database using the storeContent method.

However, this will also generate a query for each result.

One thing to note: I am using the updateOrCreate method to store the field results because I only want to persist new results to the database. Where a result already exists and only its content changed, I want to update the existing row.

For reference, the above request generates the 8 queries below:

select * from `streams` where `streams`.`id` = ? limit 1
select * from `documents` where `documents`.`id` = ? limit 1
select * from `streams` where `streams`.`id` = ? limit 1    
select * from `field_rules` where (`stream_id` = ? and `fieldable_type` = ?)
select * from `field_rule_results` where `field_rule_results`.`typeable_id` = ? and...
select * from `field_rule_results` where `field_rule_results`.`typeable_id` = ? and...  
insert into `field_rule_results` (`field_rule_id`, `typeable_id`, `typeable_type`, `content`, `updated_at`, `created_at`) values (..)
insert into `field_rule_results` (`field_rule_id`, `typeable_id`, `typeable_type`, `content`, `updated_at`, `created_at`) values (..)

The above works fine, but it seems a bit heavy, and I can imagine that once my users start to generate a lot of rules/results, this will become a problem.

Is there any way that I can optimize/refactor above setup?
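One direction worth considering (a sketch; it assumes Laravel 8+ and a unique index on (field_rule_id, typeable_id, typeable_type), neither of which is stated in the question): replace the per-result updateOrCreate round trips with a single upsert statement.

```php
// Persist all parsed results in one statement instead of one
// select + one insert/update per rule result.
public function storeContent(array $parsed): object
{
    // Tag each row with the polymorphic type before bulk writing.
    $rows = array_map(function (array $parse) {
        return $parse + ['typeable_type' => static::class];
    }, $parsed);

    FieldRuleResult::upsert(
        $rows,
        ['field_rule_id', 'typeable_id', 'typeable_type'], // unique-by columns
        ['content']                                        // columns to refresh
    );

    return $this;
}
```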



#StackBounty: #php #laravel #eloquent Updating one-to-many relationships with updateOrCreate methods

Bounty: 50

My main model Tag has a one-to-many relationship with Product, where one Tag can have many Products assigned to it via a tag_id column on the DB.

On my edit view, I am allowing users to edit the tag products. These products can be added/edited/deleted on the form request.

Each product field on the form is picked up from a request() array, e.g. request('title'), request('price').

For example, I have set $title[$key] from the request('title') array.

My next thought was to loop through each of the products for this tag and updateOrCreate based on the request data. The issue here is that there is no way to detect whether that particular product actually needed updating.

TagController – Update Product Model (One-to-Many)

foreach($tag->products as $key => $product){

  Product::updateOrCreate([
   'id'  => $product->id,
   ],
     [
       'title' => $title[$key],
       'price' => $price[$key],
       'detail' => $detail[$key],
       'order' => $order[$key],
       'tagX' => $tagX[$key],
       'tagY' => $tagY[$key],
       'thumb' => $img[$key],
   ]);
}

For the initial tag update, I have an if statement which works great (albeit messily) for the main tag img.

TagController – Update Tag Model

//Update the tag collection
if($request->hasFile('img')) {
  $tag->update([
    'name' => $name,
    'hashtag' => $hashtag,
    'img' => $imgPub,
  ]);
} else{
  $tag->update([
    'name' => $name,
    'hashtag' => $hashtag,
  ]);
}

Is there a better way to determine if the product fields were updated on the request?

Ideally I would like the user to be able to add/remove or edit products from the tag edit request, but not delete existing product images if they have not been updated. Thanks!
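One possible shape for the add/edit/remove case (a sketch; it assumes the form also submits a parallel array of existing product IDs as request('ids'), which is not in the original code):

```php
// Hypothetical sync: update/create submitted rows, delete the rest.
$ids = request('ids', []); // existing product IDs; null/empty for new rows

// Remove products the user deleted on the form.
$tag->products()->whereNotIn('id', array_filter($ids))->delete();

foreach (request('title', []) as $key => $title) {
    $tag->products()->updateOrCreate(
        ['id' => $ids[$key] ?? null], // null id => create a new product
        [
            'title'  => $title,
            'price'  => request('price')[$key],
            'detail' => request('detail')[$key],
        ]
    );
}
```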

