#StackBounty: #validation #kotlin #websocket #micronaut Micronaut websocket message validation

Bounty: 150

I am trying to implement validation for the message in a websocket @OnMessage handler, which is normally a plain String, as seen below.

    @OnMessage
    open fun onMessage(
        gameId: String,
        username: String,
        message: String,
        session: WebSocketSession?
    ): Publisher<String> {
        val msg = "[$username] $message"
        return broadcaster.broadcast(msg, isValid(gameId))
    }

I am using Micronaut’s Bean Validation to validate the message as a GameClientMessage object, as seen below.

    package com.andreasjj.websocket

    import com.fasterxml.jackson.databind.exc.MismatchedInputException
    import io.micronaut.core.annotation.Introspected
    import io.micronaut.http.annotation.Error
    import io.micronaut.http.codec.CodecException
    import io.micronaut.websocket.WebSocketBroadcaster
    import io.micronaut.websocket.WebSocketSession
    import io.micronaut.websocket.annotation.*
    import org.reactivestreams.Publisher
    import java.util.function.Predicate
    import javax.validation.Valid
    import javax.validation.constraints.NotBlank
    import javax.validation.constraints.NotNull

    @ServerWebSocket("/ws/game/{gameId}/{username}")
    open class GameWebsocket(private val broadcaster: WebSocketBroadcaster) {
        @OnOpen
        fun onOpen(gameId: String, username: String, session: WebSocketSession?): Publisher<String> {
            val msg = "Hello [$username]"
            return broadcaster.broadcast(msg, isValid(gameId))
        }

        @OnMessage
        open fun onMessage(
            gameId: String,
            username: String,
            @Valid message: GameClientMessage,
            session: WebSocketSession?
        ): Publisher<String> {
            println(message)
            val msg = "[$username] $message"
            return broadcaster.broadcast(msg, isValid(gameId))
        }

        @OnClose
        fun onClose(
            gameId: String,
            username: String,
            session: WebSocketSession?
        ): Publisher<String> {
            val msg = "Bye [$username]"
            return broadcaster.broadcast(msg, isValid(gameId))
        }

        // Broadcast only to sessions that belong to the same game.
        private fun isValid(gameId: String): Predicate<WebSocketSession> {
            return Predicate { s: WebSocketSession ->
                gameId.equals(
                    s.uriVariables.get(
                        "gameId",
                        String::class.java, null
                    ), ignoreCase = true
                )
            }
        }
    }

    @Introspected
    data class GameClientMessage(
        @field:NotNull var type: GameClientMessageType,
        @field:NotBlank var text: String
    )

    enum class GameClientMessageType {
        STARTGAME,
        ENDGAME,
        SKIPROUND,
        NEXTROUND,
        ANSWER
    }

This sort of works, but when the message doesn’t follow the validation requirements, it fails with an ugly error and closes the websocket connection (in the log below, the client sent {"action":"Message","text":"Hello"}, which is missing the required "type" field):

backend_1            | 13:59:47.241 [default-nioEventLoopGroup-1-4] ERROR i.m.h.s.n.w.NettyServerWebSocketHandler - Error Processing WebSocket Message [io.micronaut.websocket.context.DefaultWebSocketBeanRegistry$DefaultWebSocketBean@13e15b55]: Error decoding stream for type [class com.andreasjj.websocket.GameClientMessage]: Missing required creator property 'type' (index 0)
backend_1            |  at [Source: (byte[])"{"action":"Message","text":"Hello"}"; line: 1, column: 35]
backend_1            | io.micronaut.http.codec.CodecException: Error decoding stream for type [class com.andreasjj.websocket.GameClientMessage]: Missing required creator property 'type' (index 0)
backend_1            |  at [Source: (byte[])"{"action":"Message","text":"Hello"}"; line: 1, column: 35]
backend_1            |  at io.micronaut.jackson.codec.JacksonMediaTypeCodec.decode(JacksonMediaTypeCodec.java:178)
backend_1            |  at io.micronaut.http.netty.websocket.AbstractNettyWebSocketHandler.lambda$handleWebSocketFrame$4(AbstractNettyWebSocketHandler.java:331)
backend_1            |  at java.base/java.util.Optional.map(Optional.java:265)
backend_1            |  at io.micronaut.http.netty.websocket.AbstractNettyWebSocketHandler.handleWebSocketFrame(AbstractNettyWebSocketHandler.java:331)
backend_1            |  at io.micronaut.http.netty.websocket.AbstractNettyWebSocketHandler.channelRead0(AbstractNettyWebSocketHandler.java:294)
backend_1            |  at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
backend_1            |  at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
backend_1            |  at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
backend_1            |  at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
backend_1            |  at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
backend_1            |  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
backend_1            |  at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
backend_1            |  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
backend_1            |  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
backend_1            |  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
backend_1            |  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
backend_1            |  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
backend_1            |  at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
backend_1            |  at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
backend_1            |  at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
backend_1            |  at java.base/java.lang.Thread.run(Thread.java:834)
backend_1            | Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Missing required creator property 'type' (index 0)
backend_1            |  at [Source: (byte[])"{"action":"Message","text":"Hello"}"; line: 1, column: 35]
backend_1            |  at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
backend_1            |  at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1615)
backend_1            |  at com.fasterxml.jackson.databind.deser.impl.PropertyValueBuffer._findMissing(PropertyValueBuffer.java:194)
backend_1            |  at com.fasterxml.jackson.databind.deser.impl.PropertyValueBuffer.getParameters(PropertyValueBuffer.java:160)
backend_1            |  at com.fasterxml.jackson.databind.deser.ValueInstantiator.createFromObjectWith(ValueInstantiator.java:288)
backend_1            |  at com.fasterxml.jackson.databind.deser.impl.PropertyBasedCreator.build(PropertyBasedCreator.java:202)
backend_1            |  at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:520)
backend_1            |  at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1390)
backend_1            |  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:362)
backend_1            |  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:195)
backend_1            |  at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
backend_1            |  at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4593)
backend_1            |  at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3609)
backend_1            |  at io.micronaut.jackson.codec.JacksonMediaTypeCodec.decode(JacksonMediaTypeCodec.java:175)
backend_1            |  ... 30 common frames omitted

I tried adding an @OnError function, which does get called, but at that point the websocket connection gets closed with an ‘Abnormal Closure’, so it isn’t very useful. How would I go about handling the error/the validation without the entire thing dying?
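
One direction I am considering, though I have not verified it end to end: take the message as a raw String again and do the decoding and validation by hand, so that a bad payload can be answered over the session instead of bubbling up through Netty and killing the connection. A rough sketch (the class and names are mine; it assumes Micronaut can inject its Jackson ObjectMapper and a javax.validation.Validator, which I believe it can with the jackson and validation modules on the classpath):

    import com.fasterxml.jackson.core.JsonProcessingException
    import com.fasterxml.jackson.databind.ObjectMapper
    import io.micronaut.websocket.WebSocketBroadcaster
    import io.micronaut.websocket.WebSocketSession
    import io.micronaut.websocket.annotation.OnMessage
    import io.micronaut.websocket.annotation.ServerWebSocket
    import org.reactivestreams.Publisher
    import java.util.function.Predicate
    import javax.validation.Validator

    @ServerWebSocket("/ws/game/{gameId}/{username}")
    open class LenientGameWebsocket(
        private val broadcaster: WebSocketBroadcaster,
        private val objectMapper: ObjectMapper, // Micronaut's Jackson bean
        private val validator: Validator        // bean from the validation module
    ) {
        @OnMessage
        open fun onMessage(
            gameId: String,
            username: String,
            message: String, // raw frame, decoded by hand below
            session: WebSocketSession
        ): Publisher<String> {
            val parsed = try {
                objectMapper.readValue(message, GameClientMessage::class.java)
            } catch (e: JsonProcessingException) {
                // Decoding failed: reply to this client only; the socket stays open.
                return session.send("Malformed message: ${e.originalMessage}")
            }
            val violations = validator.validate(parsed)
            if (violations.isNotEmpty()) {
                val details = violations.joinToString { "${it.propertyPath}: ${it.message}" }
                return session.send("Invalid message: $details")
            }
            return broadcaster.broadcast("[$username] ${parsed.text}", isValid(gameId))
        }

        // Same predicate as above: broadcast only within the same game.
        private fun isValid(gameId: String): Predicate<WebSocketSession> =
            Predicate { s ->
                gameId.equals(s.uriVariables.get("gameId", String::class.java, null), ignoreCase = true)
            }
    }

With @Valid removed and the parameter typed as String, the framework should never try to decode the frame itself, so the CodecException path from the stack trace above should not be hit.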


Get this bounty!!!

#StackBounty: #javascript #jquery #validation #event-handling Submitting form jQuery checkboxlist

Bounty: 50

In my requestForm submit function I have the logic below, which checks three checkbox groups and displays a label with an error message for whichever group has nothing checked. Is there better logic I could use? I would be happy for any kind of feedback, since this is quite a critical part of my application.

$("#requestForm").submit(function (e) {
            e.stopPropagation();
            e.preventDefault();


        var checked_sourcecheckboxes = $("#sources input[type=checkbox]:checked");
        if (checked_sourcecheckboxes.length == 0) {
            $("#lblSourcesError").show();
            return false;
        }
        else {
            $("#lblSourcesError").hide();
        }
        var checked_academicformatcheckboxes = $("#academicFormatsRow input[type=checkbox]:checked");
        if (checked_academicformatcheckboxes.length == 0) {
            $("#lblacademicFormatError").show();
            return false;
        }
        else {
            $("#lblacademicFormatError").hide();
        }
        var checked_academicmediaformatcheckboxes = $("#academicMediaFormatRow input[type=checkbox]:checked");
        if (checked_academicmediaformatcheckboxes.length == 0) {
            $("#lblacademicmediaFormatError").show();
            return false;
        }
        else {
            $("#lblacademicmediaFormatError").hide();
        }
})


Get this bounty!!!

#StackBounty: #spring #spring-boot #validation #exception #controller overridden handleMethodArgumentNotValid method of ResponseEntityE…

Bounty: 100

I am trying to have a custom validator and also an exception handler for my Spring Boot REST service, and when I added the exception handler, the validation errors stopped being sent to the UI. So I tried to override the handleMethodArgumentNotValid method, and that does not work either. Can someone give some insight into this?

This is how I have configured my validator in the controller:

    package services.rest.controller;

    import java.io.IOException;

    import javax.validation.Valid;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.web.bind.WebDataBinder;
    import org.springframework.web.bind.annotation.InitBinder;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    import lombok.extern.slf4j.Slf4j;
    import services.rest.model.TestInput;
    import services.rest.validator.DataValidator;

    @RestController
    @RequestMapping("/test")
    @Slf4j
    public class RestResource {

        @Autowired
        private DataValidator validator;

        @PostMapping("/create")
        public String create(@Valid final TestInput input) throws IOException {
            log.debug(input.toString());
            return "Success";
        }

        @InitBinder
        public void init(final WebDataBinder binder) {
            binder.addValidators(validator);
        }

    }

This is my exception handler code:

    package services.rest.exceptionhandler;

    import java.util.ArrayList;
    import java.util.List;

    import org.springframework.http.HttpHeaders;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.validation.ObjectError;
    import org.springframework.web.bind.MethodArgumentNotValidException;
    import org.springframework.web.bind.annotation.ControllerAdvice;
    import org.springframework.web.bind.annotation.ExceptionHandler;
    import org.springframework.web.context.request.WebRequest;
    import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

    @SuppressWarnings({ "unchecked", "rawtypes" })
    @ControllerAdvice
    public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {

        @ExceptionHandler(Exception.class)
        public final ResponseEntity<Object> handleAllExceptions(final Exception ex, final WebRequest request) {
            System.out.println("All exceptions Method getting executed!!!!");

            final List<String> details = new ArrayList<>();
            details.add(ex.getLocalizedMessage());
            return new ResponseEntity("Server Error", HttpStatus.INTERNAL_SERVER_ERROR);
        }

        @Override
        protected ResponseEntity<Object> handleMethodArgumentNotValid(final MethodArgumentNotValidException ex,
                final HttpHeaders headers, final HttpStatus status, final WebRequest request) {
            System.out.println("Validation Error Method getting executed!!!!");
            final List<String> details = new ArrayList<>();
            for (final ObjectError error : ex.getBindingResult().getAllErrors()) {
                details.add(error.getDefaultMessage());
            }
            return new ResponseEntity("Validation Error", HttpStatus.BAD_REQUEST);
        }
    }

Initially I did not override the handleMethodArgumentNotValid method. Even after overriding it, it still does not work.
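
One thing I am starting to suspect, though I have not verified it: since create takes its parameter without @RequestBody, the input is bound as a form/model attribute, and in Spring 5.x a failed @Valid binding of that kind throws BindException rather than MethodArgumentNotValidException, so my override would never fire. A rough sketch of the override I plan to try, written in Kotlin for consistency with the first question's code (the equivalent Java override should look the same, and in practice it would be added to the existing GlobalExceptionHandler); it assumes Spring 5.x, where ResponseEntityExceptionHandler still exposes handleBindException:

    import org.springframework.http.HttpHeaders
    import org.springframework.http.HttpStatus
    import org.springframework.http.ResponseEntity
    import org.springframework.validation.BindException
    import org.springframework.web.bind.annotation.ControllerAdvice
    import org.springframework.web.context.request.WebRequest
    import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler

    @ControllerAdvice
    class BindingAwareExceptionHandler : ResponseEntityExceptionHandler() {

        // Form-bound (non-@RequestBody) parameters that fail @Valid raise
        // BindException, which ResponseEntityExceptionHandler routes here,
        // not to handleMethodArgumentNotValid.
        override fun handleBindException(
            ex: BindException,
            headers: HttpHeaders,
            status: HttpStatus,
            request: WebRequest
        ): ResponseEntity<Any> {
            val details = ex.bindingResult.allErrors.map { it.defaultMessage ?: "invalid value" }
            return ResponseEntity<Any>(details, HttpStatus.BAD_REQUEST)
        }
    }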


Get this bounty!!!

#StackBounty: #natural-language #model-evaluation #validation #rouge #bleu Shouldn't ROUGE-1 precision be equal to BLEU with w=(1, …

Bounty: 50

I am trying to evaluate an NLP model using BLEU and ROUGE. However, I am a bit confused about the difference between those scores. While I am aware that ROUGE is aimed at recall whilst BLEU measures precision, all ROUGE implementations I have come across also output precision and the F-score. The original ROUGE paper only briefly mentions precision and the F-score, therefore I am a bit unsure about what meaning they have for ROUGE. Is ROUGE mainly about recall, with precision and the F-score just added as a complement, or is ROUGE considered to be the combination of those three scores?

What confuses me even more is that, to my understanding, ROUGE-1 precision should be equal to BLEU when using the weights (1, 0, 0, 0), but that does not seem to be the case.
The only explanation I could come up with is the brevity penalty. However, I checked that the accumulated length of the references is shorter than the length of the hypothesis, which means that the brevity penalty is 1.
Nonetheless, BLEU with w = (1, 0, 0, 0) scores 0.55673 while ROUGE-1 precision scores 0.7249.
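
To make sure we are comparing the same quantities, here are the two definitions as I understand them (these are just the standard formulas, so correct me if my understanding is off). Against a single reference and a single segment, both reduce to clipped unigram overlap divided by hypothesis length:

$$ p_1^{\text{BLEU}} = \frac{\sum_{h \in \mathcal{H}} \sum_{g \in h} \min\!\big(C_h(g),\ \max_{r \in R_h} C_r(g)\big)}{\sum_{h \in \mathcal{H}} \sum_{g \in h} C_h(g)}, \qquad P_1^{\text{ROUGE}}(h, r) = \frac{\sum_{g \in h} \min\!\big(C_h(g),\ C_r(g)\big)}{\sum_{g \in h} C_h(g)} $$

where $C_x(g)$ is the count of unigram type $g$ in segment $x$, $\mathcal{H}$ is the set of hypothesis segments, and $R_h$ are the references for hypothesis $h$. If I read these correctly, the two already differ in aggregation: BLEU clips each unigram against the maximum of its counts across all references at once and micro-averages the counts over the whole corpus, while ROUGE implementations typically score each reference separately and then take the best (or the average) per-segment score. Either difference alone seems enough to make the two numbers diverge.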

What am I getting wrong?

I am using nltk to evaluate BLEU and rouge-metric for ROUGE.

Disclaimer: I already posted this question on Data Science, however after not receiving any replies and doing some additional research on the differences between Data Science and Cross Validated, I figured that this question might be better suited for Cross Validated (correct me if I am wrong).


Get this bounty!!!
