This generalizes the "arguments of correct type" and "default values of correct type" rules into a single "values of correct type" rule, which has been rewritten to rely on a traversal rather than the utility function `isValidLiteralValue`. To reduce the breaking scope, that utility is not removed even though it is no longer used directly within the library. Since the default values rule also covered a separate validation concern, that remaining rule was renamed to the more apt "variable default value allowed".
This also includes the original errors from custom scalars in the validation error output, solving the remainder of graphql/graphql-js#821.
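As a rough sketch of how the combined rule surfaces custom scalar errors, assuming the port lands as `GraphQL\Validator\Rules\ValuesOfCorrectType` and that `DocumentValidator::validate()` keeps its current signature (`MySchema::build()` is a hypothetical factory for any schema with an Int argument `id`):
```php
<?php
use GraphQL\Language\Parser;
use GraphQL\Validator\DocumentValidator;
use GraphQL\Validator\Rules\ValuesOfCorrectType;

$schema = MySchema::build(); // hypothetical schema instance

// A literal of the wrong type is caught by the single combined rule.
$ast = Parser::parse('{ user(id: "not-an-int") { name } }');

$errors = DocumentValidator::validate($schema, $ast, [new ValuesOfCorrectType()]);

foreach ($errors as $error) {
    // For custom scalars, the original coercion exception should now be
    // carried along with the validation error instead of being swallowed.
    echo $error->getMessage(), "\n";
}
```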
ref: graphql/graphql-js#1144
This includes:
- graphql/graphql-js#1147
- graphql/graphql-js#355
This also fixes two bugs in the Schema:
- types that were not found were still added to the typeMap
- InputObject args should not be searched for types.
Errors can occur during validation if an invalid schema is provided to TypeInfo. Most checks for validity existed, but some did not. This asks flow to make those checks required and adds the remaining ones, which is important now that we allow construction of an invalid schema.
ref: graphql/graphql-js#1143
This is a fairly major refactoring of coerceValue, which now returns an Either so it can return a complete collection of errors. This allows originalError to be preserved for scalar coercion errors and ensures *all* errors are represented in the response.
This also required a minor change to the logic in execute / subscribe to allow buildExecutionContext to abruptly complete with multiple errors.
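A minimal sketch of the new shape, assuming the refactor is ported as `GraphQL\Utils\Value::coerceValue()` returning an array with `errors` and `value` keys as the PHP stand-in for the Either (`$inputType` is a hypothetical input object type):
```php
<?php
use GraphQL\Utils\Value;

// Instead of stopping at the first problem, every path that fails
// coercion is collected and reported together.
$result = Value::coerceValue(
    ['email' => 123, 'age' => 'ten'], // raw client-supplied value
    $inputType
);

if ($result['errors']) {
    foreach ($result['errors'] as $error) {
        echo $error->getMessage(), "\n"; // one entry per failing path
    }
} else {
    $coerced = $result['value'];
}
```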
ref: graphql/graphql-js#1133
If a GraphQLError represents multiple nodes across files (which can happen for validation across multiple parsed files), then the reported locations and printError output can be incorrect for the second node. This ensures locations are derived from nodes whenever possible to get correct locations, and amends the comment documentation.
ref: graphql/graphql-js#1131
This moves validation out of GraphQLSchema's constructor (but not yet from other type constructors), which is responsible for root type validation and interface implementation checking.
Reduces time to construct GraphQLSchema significantly, shifting the time to validation.
This also allows for much looser rules within the schema builders, which previously validated implicitly while trying to adhere to flow types. Instead we use any casts to loosen the rules and defer those checks to validation, where errors can be richer.
This also loosens the rule that a schema can only be constructed if it has a query type, moving that to validation as well. That makes flow typing slightly less nice, but allows for incremental schema building, which is valuable.
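A sketch of the resulting workflow, assuming the deferred checks are exposed via `Schema::assertValid()` as in the upstream change:
```php
<?php
use GraphQL\Type\Schema;

// Incremental building: constructing the schema no longer throws just
// because the query type is missing or an interface is mis-implemented.
$schema = new Schema([
    'types' => $partiallyBuiltTypes, // hypothetical, still being assembled
]);

// Validation is now an explicit step; run it once the schema is complete.
$schema->assertValid(); // throws if any of the deferred checks fail
```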
ref: graphql/graphql-js#1124
Lifted from / inspired by a similar change in graphql/graphql-js#722, this creates a new function `printError()` (and uses it as the implementation for `GraphQLError#toString()`) which prints location information in the context of an error.
This is moved from the syntax error where it used to be hard-coded, so it may now be used to format validation errors, value coercion errors, or any other error which may be associated with a location.
ref: graphql/graphql-js
BREAKING CHANGE: The SyntaxError message no longer contains the code frame, only the message; casting the error to string via `(string) $error` will print the code frame.
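A small sketch of the new behaviour, assuming the parser and SyntaxError class keep their current names:
```php
<?php
use GraphQL\Error\SyntaxError;
use GraphQL\Language\Parser;

try {
    Parser::parse('{ user { name }'); // missing closing brace
} catch (SyntaxError $error) {
    // The message alone no longer embeds the code frame...
    echo $error->getMessage(), "\n";

    // ...while casting to string prints the message plus the highlighted location.
    echo (string) $error, "\n";
}
```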
This factors out enum value validation from scalar value validation and ensures the same try/catch is used in isValidLiteralValue as in isValidPHPValue, protecting against errors thrown from valueFromAST.
ref: graphql/graphql-js#1126
A common case is encountering an error which blames to a single AST node. Ensure the GraphQLError constructor can handle this case.
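A tiny sketch, assuming the port lets `GraphQL\Error\Error` accept a single node where it previously required an array (`$fieldNode` is a hypothetical AST node being blamed):
```php
<?php
use GraphQL\Error\Error;

// No need to wrap the lone node in an array anymore.
$error = new Error('Cannot query field "foo".', $fieldNode);
```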
ref: graphql/graphql-js#1123
This adds a new function `getIntrospectionQuery()` which allows for some minor configuration of the resulting query text, such as excluding descriptions if your use case does not require them.
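A sketch of the intended usage, assuming the PHP port exposes the same option via `Introspection::getIntrospectionQuery()`:
```php
<?php
use GraphQL\Type\Introspection;

// Full introspection query, including descriptions (the default).
$query = Introspection::getIntrospectionQuery();

// Leaner query text when descriptions are not needed.
$queryWithoutDescriptions = Introspection::getIntrospectionQuery([
    'descriptions' => false,
]);
```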
ref: graphql/graphql-js#1113
`OverlappingFieldsCanBeMerged` would infinite loop when passed something like
```graphql
fragment A on User {
name
...A
}
```
It's not `OverlappingFieldsCanBeMerged`'s responsibility to detect that validation error, but we still would ideally avoid infinite looping.
This detects that case, and pretends that the infinite spread wasn't there for the purposes of this validation step.
Also, by memoizing and checking for self-references this removes duplicate reports.
ref: graphql/graphql-js#1111
This improves the performance of the overlapping fields validation and the quality of its error reporting by separating the concepts of checking fields "within" a single collection of fields from checking fields "between" two different collections of fields. This ensures that for deeply overlapping fields, nested fields are not checked against each other repeatedly. Extending this concept further, fragment spreads are no longer expanded inline before looking for conflicts; instead, the fields within a fragment are compared to the fields within the selection set which contained the referencing fragment spread.
e.g.
```graphql
{
same: a
same: b
...X
}
fragment X on T {
same: c
same: d
}
```
In the above example, the initial query body is checked "within" so `a` is compared to `b`. Also, the fragment `X` is checked "within" so `c` is compared to `d`. Because of the fragment spread, the query body and fragment `X` are checked "between" so that `a` and `b` are each compared to `c` and `d`. In this trivial example, no fewer checks are performed, but in the case where fragments are referenced multiple times, this reduces the overall number of checks (regardless of memoization).
**BREAKING**: This can change the order of fields reported when a conflict arises when fragment spreads are involved. If you are checking the precise output of errors (e.g. for unit tests), you may find existing errors change from `"a" and "c" are different fields` to `"c" and "a" are different fields`.
From a perf point of view, this is fairly minor since the memoization "PairSet" was already keeping these repeated checks from consuming time; however, this will reduce the number of memoized hits because of the algorithm improvement.
From an error reporting point of view, this reports nearest-common-ancestor issues when found in a fragment that comes later in the validation process. I've added a test which fails with the existing impl and now passes, as well as changed a comment.
This also fixes an error where validation issues could be missed because of an over-eager memoization. I've also modified the `PairSet` to be aware of both forms of memoization, also represented by a previously failing test.
ref: graphql/graphql-js#386
This changes the parsing grammar and validation rules to more correctly implement the current state of the GraphQL SDL proposal (facebook/graphql#90)
ref: graphql/graphql-js#1102
ref: graphql/graphql-js#1000
BREAKING CHANGE: SchemaBuilder::build(), buildAST() and the constructor no longer accept the type decorator; it is not needed anymore, as the library can now resolve unions and interfaces from generated schemas.
This changes the check for null/undefined to a check for undefined to determine if scalar serialization was successful or not, allowing `null` to be returned from serialize() without indicating error.
This is potentially breaking for any existing custom scalar which returned `null` from `serialize()` to indicate failure. To account for this change, it should either throw an error or return `undefined`.
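Since PHP has no native `undefined`, a port would typically signal failure by throwing. A sketch of a custom scalar updated for the new contract (the `NullableFlag` scalar is purely illustrative):
```php
<?php
use GraphQL\Error\Error;
use GraphQL\Type\Definition\CustomScalarType;

$flagType = new CustomScalarType([
    'name' => 'NullableFlag',
    'serialize' => function ($value) {
        if ($value === null || is_bool($value)) {
            return $value; // null no longer reads as "serialization failed"
        }
        // Failure must be signalled explicitly (throw, or return the
        // library's "undefined" marker) rather than by returning null.
        throw new Error('NullableFlag cannot represent value: ' . json_encode($value));
    },
    'parseValue' => function ($value) {
        return $value;
    },
    'parseLiteral' => function ($valueNode) {
        return $valueNode->value ?? null;
    },
]);
```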
ref: graphql/graphql-js#1104
* Generalizes building a value from an AST, since "scalar" could be misleading, and supporting variable values within custom scalar literals can be valuable.
* Replaces isNullish with isInvalid since `null` is a meaningful value as a result of literal parsing.
* Provides a reasonable default version of `parseLiteral`
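A sketch of what that default could look like for a custom scalar that does not care about literal structure, assuming the untyped helper is available as `GraphQL\Utils\AST::valueFromASTUntyped()` (mirroring graphql-js) and accepts the variable values as its second argument:
```php
<?php
use GraphQL\Type\Definition\CustomScalarType;
use GraphQL\Utils\AST;

// "JSON"-style scalar: literals (including lists/objects and any variables
// nested inside them) are turned into plain PHP values with no type-specific
// handling, which is essentially what a default parseLiteral would do.
$jsonType = new CustomScalarType([
    'name' => 'JSON',
    'serialize' => function ($value) {
        return $value;
    },
    'parseValue' => function ($value) {
        return $value;
    },
    'parseLiteral' => function ($valueNode, array $variables = null) {
        return AST::valueFromASTUntyped($valueNode, $variables);
    },
]);
```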
ref: 714ee980aa
ref: https://github.com/graphql/graphql-js/pull/903
* Adds support for resolving union/interface types when using a generated schema
* Moves resolveType __typename checking into defaultResolveType
* Cleans up existing tests and improves error messages
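A sketch of a generated schema relying on the default type resolver, assuming `BuildSchema::build()` and `GraphQL::executeQuery()` keep their current signatures; the `__typename` entry on the returned array is what the default resolver inspects:
```php
<?php
use GraphQL\GraphQL;
use GraphQL\Utils\BuildSchema;

$schema = BuildSchema::build('
    type Query { node: Node }
    interface Node { id: ID }
    type User implements Node { id: ID, name: String }
');

// No resolveType is configured anywhere: the default resolver picks the
// concrete type from the __typename entry of the returned value.
$rootValue = [
    'node' => ['__typename' => 'User', 'id' => '1', 'name' => 'Ada'],
];

$result = GraphQL::executeQuery(
    $schema,
    '{ node { id ... on User { name } } }',
    $rootValue
);
```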
ref: graphql/graphql-js#947
As discussed in facebook/graphql#90
This proposes replacing leading comment blocks as descriptions in the schema definition language with leading strings (typically block strings).
While I think there is some reduced ergonomics of using a string literal instead of a comment to write descriptions (unless perhaps you are accustomed to Python or Clojure), there are some compelling advantages:
* Descriptions are first-class in the AST of the schema definition language.
* Comments can remain "ignored" characters.
* No ambiguity between commented out regions and descriptions.
Specific to this reference implementation, since this is a breaking change and comment descriptions in the experimental SDL have fairly wide usage, I've left the comment description implementation intact and allow it to be enabled via an option. This should help with allowing upgrading with minimal impact on existing codebases and aid in automated transforms.
BREAKING CHANGE: Descriptions are no longer parsed from comments by default, and the value of description in nodes changed from string to StringValueNode.
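A sketch of the migration path, assuming the opt-in is exposed as a `commentDescriptions` option on `BuildSchema::build()` as it is upstream:
```php
<?php
use GraphQL\Utils\BuildSchema;

$sdl = '
    # Fetches the current viewer.
    type Query {
        viewer: String
    }
';

// Default behaviour: the leading comment is treated as ignored characters.
$schema = BuildSchema::build($sdl);

// Legacy behaviour while migrating existing SDL: comments become descriptions.
$legacySchema = BuildSchema::build($sdl, ['commentDescriptions' => true]);
```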
This RFC adds a new form of `StringValue`, the multi-line string, similar to that found in Python and Scala.
A multi-line string starts and ends with a triple-quote:
```
"""This is a triple-quoted string
and it can contain multiple lines"""
```
Multi-line strings are useful for typing literal bodies of text where new lines should be interpreted literally. In fact, the only escape sequence used is `\"""` and `\` is otherwise allowed unescaped. This is beneficial when writing documentation within strings which may reference the back-slash often:
```
"""
In a multi-line string \n and C:\\ are unescaped.
"""
```
The primary value of multi-line strings is to write long-form input directly in query text, in tools like GraphiQL, and as a prerequisite to another pending RFC to allow docstring style documentation in the Schema Definition Language.
ref: graphql/graphql-js#926
This reverts #202 (commit 9d37f4c) because trying to parse the PSR-7 request was a mistake. The whole point of PSR-7 is to allow for interoperability and to be able to use specialized libs for body parsing (amongst many other things). Trying to parse it ourselves would open a can of worms if/when other content types have to be supported. It is more correct and future safe to require that the body is parsed before being passed to GraphQL.
Because the PSR-7 specification only specifies that `getParsedBody()` **may** return the parsed body for `application/json`, we cannot assume that this is always the case. So if the returned parsed body is an empty array, it means we should try to parse it ourselves (`null` would mean no body at all according to the spec).
With this modification we try to use the given parsed body, but fall back to parsing the body ourselves if necessary. This leaves the door open to custom parsing implementations if needed, while making the library easier to use out of the box.
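A sketch of the calling convention this leads to, assuming the server facade offers a PSR-7 entry point such as `StandardServer::executePsrRequest()`:
```php
<?php
use GraphQL\Server\StandardServer;
use Psr\Http\Message\ServerRequestInterface;

function handleGraphQL(ServerRequestInterface $request, StandardServer $server)
{
    // Preferred: a PSR-7 body-parsing middleware has already populated
    // getParsedBody() for application/json (or any other content type).
    //
    // Fallback: if no middleware did so and getParsedBody() is an empty
    // array, the library decodes the raw JSON body itself.
    return $server->executePsrRequest($request);
}
```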
The framework currently coerces query variables similarly to the way it treats output values, meaning it attempts to coerce the value into the field's corresponding data type regardless of the received value. According to items 3f and 3g in section 6.1.2 (http://facebook.github.io/graphql/#sec-Validating-Requests) of Facebook's GraphQL specification, query variables should be coerced according to their type's input coercion rules, laid out in section 3.1.1 (http://facebook.github.io/graphql/#sec-Scalars). If the value cannot be coerced into the correct type according to the input coercion rules for the type, a query error should be thrown. This ensures that client-provided query variables are of a valid format and type by the time they are passed into an implementing resolver.
This patch fixes the above issue by updating the way query variables are sanitized during the process of parsing the query. It directly follows the rules for scalar input coercion laid out by the specification and throws query errors when a value that cannot be coerced to the correct type is given. Tests for Values::isValidPHPValue will also be updated to ensure it performs the correct type checks for the given type and value. A new test case will also be added for Values::getVariableValues to make sure it also enforces the scalar input coercion rules and throws errors for invalid values.
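A sketch of the resulting behaviour, assuming the `GraphQL::executeQuery()` facade and current class paths (the schema is purely illustrative):
```php
<?php
use GraphQL\GraphQL;
use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\Type;
use GraphQL\Type\Schema;

$schema = new Schema([
    'query' => new ObjectType([
        'name' => 'Query',
        'fields' => [
            'user' => [
                'type' => Type::string(),
                'args' => ['id' => ['type' => Type::nonNull(Type::int())]],
                'resolve' => function ($root, $args) {
                    return 'user-' . $args['id']; // only reached with a real int
                },
            ],
        ],
    ]),
]);

$query = 'query ($id: Int!) { user(id: $id) }';

// Under strict input coercion the string "3" is not silently turned into 3;
// the response carries a query error and the resolver is never called.
$result = GraphQL::executeQuery($schema, $query, null, null, ['id' => '3']);
var_dump($result->errors);
```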