Transition to server-side development and departure from the standard pipeline

(Polygon Pictures Inc. / Studio Phones)
We plan to continue making additions and revisions based on discussions at future seminars.
translated by PPI Translation Team

■Overview


In this session we will discuss modeling on the server side; in these materials, however, I would also like to summarize some thoughts on the background to this transition to server-side development and the departure from standard pipelines, and on how development is expected to differ from the past.
In addition, I would like to clarify the various issues that arise when adapting the processes of an actual, existing pipeline so that they can be executed on the server side.


■Background


In recent years, as the features available in commercial DCC tools have continued to grow, it seems that many studios no longer need everything that has been developed. At the same time, a pattern has emerged in which studios develop in-house plugins on top of those commercial DCC tools, tailored to the studio's workflow and the title in production, while keeping the interface artists are already familiar with. I believe many studios are now searching for infrastructure and pipeline solutions in which in-house feature development is, as much as possible, deployed on the server side and supplied to artists effectively as microservices.
Among studios that have taken this kind of server-side environment beyond a certain point, a style of operation also seems to be becoming more common in which all features are deployed on the server, an artist-facing interface is provided via the web browser, and work data and assets are uploaded to storage and sent to an in-house rendering engine through a browser console.

This kind of design draws on the comparatively mature development practices of general IT and web applications rather than those of a video production studio; even so, the shift of video production pipelines toward web browser-based interfaces feels like a natural progression.
By separating the problem into expanding features on the server side and building the web browser interface, development on the artist side can focus on an interface that isolates pure operability; I believe we are now at the stage of beginning to understand how to transition to this way of working, or of just starting to put it into practice.
There are a number of motivations for moving toward this kind of production style, and each studio will weigh different factors, such as the growing cost of transmitting assets from the asset server to the artist's console, or the ease of administering libraries when using various features built in-house.
The point that could be considered the greatest departure from existing production workflows, yet perhaps one of its largest merits, is that we no longer have to force a standardized pipeline on artists or between partner studios; instead, features can be deployed to the server without worrying about standardization, which allows a flexible working style for the artists who access those tools and gives us further freedom in constructing the production pipeline.


■Standardization of the pipeline and its disadvantages


In building an efficient, segmented production workflow and the pipeline that supports it, out of consideration for the retraining cost of artists who move between studios in a highly fluid labor market, and also for the benefit of the engineers actually developing the pipeline, it is the moderators' belief that pipelines and workflows until now have been created with a certain level of standardization.
While the finer points may differ, pipeline construction tends to follow a shared set of understandings that could be called “normalization”, and I believe this has been a barrier to studios developing their own distinctive character.
In other words, while a set of standard rules is of course required to work efficiently when coordinating with another company, it can also be argued that as more studios have compartmentalized their workflows, these restrictions have tended to grow excessively, and through that natural progression have pushed us toward building normalized systems.
Compared with the paradigm in which restrictions on asset data transfer and the like have grown ever stronger, the paradigm in which features are deployed on the server side tends to lighten those restrictions and limitations. Assets are handled on the server via the web browser, features can be called up freely as microservices, and once work is complete the file is simply stored in another location on the server; the freedom here lies in supplying, through the browser, an interface sufficient for the artists' creative work, while developers can concentrate on developing the features deployed on the servers. This is the moderators' belief, and I would like to discuss it further during our seminars.


■DCC tools and the pipeline


A standard video production pipeline is built mainly around commercial, or sometimes open-source, DCC tools. This can take a number of forms, from simply using the asset management and pipeline tools packaged with those DCC tools to developing in-house tools and plugins that fit the studio's workflow, and depending on the studio there are also areas where unique development is done to match that workflow.
Data transmitted over these pipelines ranges from the proprietary file formats of the DCC tools themselves to the shared, open-source formats such as OpenEXR, Alembic, and USD that have appeared in recent years. In this sense the exchange of data between DCC tools can also be said to have become standardized.
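As a concrete illustration of this format-level interoperability, here is a minimal sketch that reads a USD file with the pxr Python bindings alone, with no DCC tool running; the file name asset.usda and the choice of inspecting mesh prims are placeholders chosen purely for illustration.

    from pxr import Usd, UsdGeom

    # Open a USD stage directly, without any DCC tool in the loop.
    stage = Usd.Stage.Open("asset.usda")  # placeholder file name

    # Walk the scene graph and report every mesh prim found.
    for prim in stage.Traverse():
        if prim.IsA(UsdGeom.Mesh):
            mesh = UsdGeom.Mesh(prim)
            points = mesh.GetPointsAttr().Get()
            print(prim.GetPath(), len(points) if points else 0, "points")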
What they have in common, however, is that as convenient features have been added to the DCC tools used every day, more of the production's processes have come to be done inside those tools, and as dependence on them has grown, pipeline construction has tended to follow the tools' frameworks and API specifications.
Furthermore, keeping up with specification changes on the DCC tool side forces pipeline updates, and occasionally that direction and workflow differ from what the studio actually wants, in some cases incurring unintended maintenance costs. Particularly in studios that construct large-scale pipelines, the work involved in these updates can become expensive.
On the other hand, these DCC tool suites generally assume they are running on a single client machine, and now that good performance has been demonstrated on server infrastructure in cloud services, there seem to be many issues worth discussing. When looking into server-side development that takes cloud infrastructure into account, the setup must change so that the kinds of processes performed internally in existing DCC tools can be broken down and run on that infrastructure as microservices. Assuming that DCC tools will continue to be used makes this harder to address; it is this moderator's belief that when reviewing pipeline plans, the parts that depend on DCC tools will need to be developed as separate features.


■Server infrastructure and microservices


In cloud infrastructure, virtualized or containerized environments are the standard, so it can be called an environment in which processes designed as microservices are easy to run. By turning features into microservices, I believe we can realize the merit of cloud infrastructure, namely flexible scalability, but as we move pipeline development onto the server side we will need to take the characteristics of that infrastructure into account.
Because DCC tools implement a wide range of features in a single application, launching the application takes time, and in most cases even startup alone consumes considerable resources. In a standard pipeline, where the DCC tools run on the artist's client machine, this is not a problem; but when they run in a container on server infrastructure, the sheer scale of a single application becomes a burden, and from the infrastructure's point of view I believe we could achieve better server-side performance if we could switch to a format that is lighter to execute.
However, isolating and customizing the features found in existing DCC tools and serving them to the user side is difficult even with the DCC tools' APIs, and packaging them in a form suited to server infrastructure, such as a container, is also hard to do through the DCC tools themselves; at present, in-house development may be the only option.
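As a minimal sketch of what such an in-house feature might look like, the example below exposes a single geometry operation as a small HTTP service that could run in a container. It assumes Flask is available; the endpoint name, port, and the simple vertex-scaling operation are hypothetical choices made only for illustration.

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Hypothetical endpoint: scale a list of vertex positions by a uniform factor.
    @app.route("/scale-points", methods=["POST"])
    def scale_points():
        payload = request.get_json()
        factor = float(payload.get("factor", 1.0))
        points = payload["points"]  # expected as [[x, y, z], ...]
        scaled = [[c * factor for c in p] for p in points]
        return jsonify({"points": scaled})

    if __name__ == "__main__":
        # In production this would sit behind a proper WSGI server inside a container.
        app.run(host="0.0.0.0", port=8080)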
If these features can be packaged and served freely as compact microservices, it becomes easy to gain the scalability merits of render clusters through containers on cloud infrastructure, using a large amount of render resources at once within a short period of time; and for processes that are limited by what a single workstation can do, even a high-powered one, it becomes possible to build an infrastructure in which surplus render resources are allocated and run as needed.
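To make the scalability point concrete, the following sketch fans a batch of requests out to such a service from a client; the service URL and the /scale-points endpoint from the sketch above are hypothetical, and in practice the address would point at a load-balanced set of containers.

    import concurrent.futures
    import requests

    SERVICE_URL = "http://geometry-service.internal:8080/scale-points"  # hypothetical address

    def submit(points):
        # Each call could be handled by any one of many identical containers.
        resp = requests.post(SERVICE_URL, json={"points": points, "factor": 2.0}, timeout=30)
        resp.raise_for_status()
        return resp.json()["points"]

    batches = [[[i, i, i]] for i in range(100)]  # placeholder work items

    with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
        results = list(pool.map(submit, batches))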
On the data I/O side, too, we need to consider the particular characteristics of cloud storage. Most I/O in DCC tools is based on file paths mounted on a file system, but since cloud storage tends to be object storage with a RESTful interface, the overall design of data I/O will need to be reworked.
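The difference is easy to see in a short sketch. The version below contrasts a path-based read with access to an S3-compatible object store via boto3; the bucket, key, and mount path are placeholders, and boto3 is only one possible client for such an interface.

    import boto3

    # File-system style: the asset is addressed by a mounted path.
    with open("/mnt/assets/shot010/model.usd", "rb") as f:
        data = f.read()

    # Object-storage style: the asset is addressed by bucket and key over a REST API.
    s3 = boto3.client("s3")  # assumes an S3-compatible endpoint is configured
    obj = s3.get_object(Bucket="studio-assets", Key="shot010/model.usd")
    data = obj["Body"].read()

    # Writing results back is likewise an API call rather than a path write.
    s3.put_object(Bucket="studio-assets", Key="shot010/model_processed.usd", Body=data)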
Simply running an existing standard pipeline as is on the server side therefore makes it difficult to draw out the advantages of cloud infrastructure; the fundamental setup needs to change to one that assumes server-side execution, which means rethinking both the know-how embedded in existing systems and the tools themselves. From the perspective of moving away from the standard pipeline, I believe we will shift toward not using DCC tools and see an increase in in-house development.


■User interface


In a standard pipeline, so that many artists can work with a familiar interface, the UI is typically designed to be generic and broad in features while relying on the DCC tools' APIs. These interfaces have been developed over a long time to meet artists' demands, but the freedom that comes with implementing them as client applications has tended to produce complicated UIs.
With development on the server side, on the other hand, the UI/UX is browser-based and the interface and its interactions travel over the network, so attention must be paid to things like latency and synchronization logic that mattered little in standalone applications. Advances in JavaScript, HTML5, and the like have produced an environment in which sophisticated UIs can be built, but creating a browser UI with the same level of features as a standalone application remains difficult, and to an artist used to the interfaces of a standard pipeline it may feel that the UI lacks functionality.
In a standard pipeline, however, because the API specifications differ for each DCC tool, a separate UI must be developed for each one and maintained every time the tool is updated, which drives costs up dramatically.

By contrast, when individual features are developed in-house on server infrastructure, we are free to separate the planning of features from that of the interface; and for the interface that calls up and uses features turned into microservices, supplying a UI/UX that meets the minimum requirements lets us keep things simpler than the interfaces found in DCC tools.
These changes have the benefit of yielding a simple system in which only the UI and the features to be run in containers are developed and deployed, and depending on the situation this may also lower development and maintenance costs.
If we do pursue server-side development, I believe it will be important to build it while weighing the merits of server-side development and implementation, rather than assuming we must supply artists with the same kind of UI as in the DCC tools used until now. We will need to consider what the most appropriate browser-based UI/UX would be, and I would like to discuss this with the members of these seminars.