The world of art has always held an air of mystery, conveying powerful meanings and, at times, concealing real-life secrets, messages, treasures, tributes, or techniques. Standardized music encoding has the potential to enable in-depth interdisciplinary engagement and research: transforming abstract musical information into machine-readable structures facilitates the application of music in several fields. In the creative arts, creators may hide musical information within images. In psychology, researchers can share data to address comparative research questions about how humans engage with music, for example by analyzing listeners' physiological and psychological responses to musical elements such as chords and notes. Note-level or chord-level encoding may therefore contribute to researchers' understanding of musical meaning.
The need for digital music encoding standards to support advanced computational musicology was recognized as early as the 1960s [1]. One of the earliest and most widely used encoding languages was the ASCII-based Plaine and Easie code [2], used to encode incipits in the Répertoire International des Sources Musicales collection. Another was the Ford-Columbia Music Representation, subsequently known as DARMS [3], which was designed to support both engraving and computer-aided music research. An important issue raised in these early years, and one we continue to grapple with today, is striking a balance between simple encodings that fulfil the needs of a single project and more complex encodings that can be used to generate a complete musical score [4]. One of the most significant developments of the 1980s was the Musical Instrument Digital Interface (MIDI) protocol [5, 6]. Although originally designed to send control messages between hardware music devices, it has also been used extensively by music researchers. Another significant development was the growing popularity of text-based music encoding languages designed explicitly for research rather than for engraving or sound generation, that is, human-readable encoding languages [7]. The MuTaTeD'II project rendered the Standard Music Description Language and the Notation Interchange File Format into a more easily computer-processable representation [8]. A description of an earlier version of MusicXML was proposed in the same year [9]. Futrelle and Downie addressed the issue of music representation in their early survey of interdisciplinary issues [10]. They highlighted some of the early work on symbolic representations of non-Western classical music, such as the work by Lee et al. (2002) on Korean music [11] and by Geekie on Carnatic ragas [12]. Furthermore, in 2003, the first study of a digital edition project was conducted [13].
These projects have motivated the development of music-encoding formats. The Music Encoding Initiative (MEI) is an XML-based schema, initially developed by Roland, that allows for the explicit separation of the presentation and content aspects of musical documents [14]. In recent years, it has been extended to allow for full-scale document encoding [15], as well as integration into Verovio [16], a project for engraving musical scores in web browsers. MusicXML and MuseScore have also been used by researchers within the music information retrieval (MIR) community, but the development of encoding formats has taken place outside the MIR community, largely with the goal of facilitating robust transfer between different music notation programs or encoding formats. In recent years, one major advancement in encoding has been the development of tools that integrate encoding formats into linked data representations of different types of musical information. Historically, however, much of the work on linked data has focused on developing methods for linking different musical documents [17]. An exception is Fields et al., who explored integrating human annotations of musical forms into a linked data framework [18]; more recent work has begun to incorporate note-level encodings into these frameworks, facilitating content-based search and indexing. An example is the Music Encoding and Linked Data framework [19–22], which uses MEI encoding and is integrated into the large-scale Towards Richer Online Music Public-domain Archives project [23]. The work by Meroño-Peñuela et al., which integrates MIDI encodings within a linked data framework, also provides the opportunity to link data to musical elements [24]. These projects may offer solutions to some of the issues that arise with current datasets, although this line of work has predominantly produced protocols, tools, and datasets with only document-level linkages [25].
This research proposes a novel music encoding method called NuText, which is based on the logic of numbered musical notation and is both human-readable and note-level in granularity. NuText is convertible into the machine-readable MIDI and MusicXML formats. Moreover, NuText can be converted back into human-readable standard numbered musical notation.
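To illustrate the kind of mapping such a conversion requires, the sketch below translates numbered-notation (jianpu) pitch tokens into MIDI note numbers. This is an illustrative assumption, not the actual NuText grammar: the token syntax (digits 1–7 for scale degrees, `'` for an octave up, `,` for an octave down), the key of C major, and the choice of middle C (MIDI 60) as degree "1" are all hypothetical choices made for the example.

```python
# Hypothetical sketch: numbered-notation (jianpu) tokens -> MIDI note numbers.
# Assumptions (not the paper's actual NuText syntax): key of C major,
# middle C (MIDI 60) as degree "1", "'" raises and "," lowers an octave.

# Semitone offsets of the seven scale degrees in a major key.
_DEGREE_OFFSETS = {1: 0, 2: 2, 3: 4, 4: 5, 5: 7, 6: 9, 7: 11}

def token_to_midi(token: str, tonic: int = 60) -> int:
    """Convert a token such as "5", "1'", or "6," into a MIDI note number."""
    degree = int(token[0])
    if degree == 0:
        raise ValueError("0 denotes a rest, not a pitch")
    octave_shift = token.count("'") - token.count(",")
    return tonic + _DEGREE_OFFSETS[degree] + 12 * octave_shift

# "1 3 5 1'" is a C major arpeggio: C4, E4, G4, C5.
melody = [token_to_midi(t) for t in "1 3 5 1'".split()]
# melody == [60, 64, 67, 72]
```

In a real converter these MIDI numbers would then be written out as MIDI note-on/note-off events or as MusicXML `<pitch>` elements, with durations handled by additional tokens.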
Modern technologies have merged virtual reality with physical reality. However, there is a lack of research on using steganography to embed musical information in digital artwork images. The world of Non-Fungible Tokens (NFTs) has taken over the scene by converting digital formats such as JPEG, BMP, or TIFF into something uncopiable and unique [26]. This study proposes a note-level music information encoding method based on numbered musical notation that can hide musical information in digital artworks or photographs through steganography, making those artworks or photographs unique.
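To make the embedding idea concrete, the sketch below shows classic least-significant-bit (LSB) steganography: a notation string is written into the lowest bit of each cover byte, prefixed with a 32-bit length so it can be recovered. This is a minimal illustrative scheme under stated assumptions, not the study's actual embedding method; the function names and the length-prefix layout are inventions for the example, and raw bytes stand in for decoded image pixels.

```python
# Minimal LSB-steganography sketch (illustrative, not the paper's scheme):
# hide a notation string in the least-significant bits of raw "pixel" bytes.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    # Prefix the message with its length (32-bit big-endian) for extraction.
    payload = len(message).to_bytes(4, "big") + message
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract(pixels: bytearray) -> bytes:
    def read_bytes(start_bit: int, n: int) -> bytes:
        value = 0
        for i in range(n * 8):
            value = (value << 1) | (pixels[start_bit + i] & 1)
        return value.to_bytes(n, "big")
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)

# Round trip: hide a numbered-notation phrase in simulated pixel data.
cover = bytearray(range(256)) * 8          # 2048 cover bytes
stego = embed(cover, b"1 3 5 1'")
recovered = extract(stego)                 # == b"1 3 5 1'"
```

Because only the lowest bit of each byte changes, the visual difference between the cover and stego images is imperceptible, which is what makes the artwork carry its hidden musical payload without apparent alteration.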