Semantic studies of sign language have led to two general claims. First, in some cases sign languages make visible crucial aspects of the Logical Form of sentences, ones that can only be inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables or ‘indices’, whereas the latter remain covert in spoken language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as simplified pictures of what they denote. As a result, the semantic system of spoken languages can in some respects be seen as a ‘degenerate’ version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (Goldin-Meadow and Brentari 2015). To address this debate, one must compare a semantics with iconicity for sign language to a semantics with co-speech gestures for spoken language. We sketch such a comparison, focusing on the assertive vs. non-assertive status of iconic/gestural enrichments in each modality.