About OpenPlay Technology Pvt. Ltd.
OpenPlay is a leading multi-player skill-based gaming company in the real money gaming (RMG) space and is part of the Nazara Group. Founded by Sreeram Reddy Vanga in 2011, the company has achieved the rare dual feat of consistent growth and robust profitability over the years. With a team of over 100 passionate and creative people, OpenPlay gives its people the perfect atmosphere for creativity and development through a culture of empowerment, openness, and collective ownership.
The Indian gaming industry is poised to grow at over 25% annually, from $1.5B today to a $5B industry. The RMG segment is the biggest contributor to the gaming business, accounting for over 40% of it.
Our flagship brand, Classic Rummy, is based on traditional 13-card rummy and is redefining the gaming experience for the Indian audience while remaining fully compliant with the word and spirit of Indian legal requirements and responsible gaming. OpenPlay aims to be India’s largest vernacular social gaming platform on the back of product innovation and data-driven player lifecycle management. The company, with over 1 million players, was acquired by Nazara Technologies Ltd. in 2021. To know more about us, please visit https://www.openplaytech.com/
About Nazara Technologies Ltd.
Nazara is a leading diversified gaming and sports media platform with a presence in India, Africa and North America. It is the only listed gaming entity in India and has a market capitalization of over USD 600M. Nazara has built the most diverse gaming ecosystem across interactive gaming, eSports, RMG, ad-tech and gamified early learning. Its subsidiaries include Nodwin Gaming, Sportskeeda, Paperboat Apps Private Limited, Datawrkz and WildWorks, among numerous others.
Position: Data Engineer
Key Job Responsibilities:
• Develop, construct, test, and maintain architectures such as large-scale processing systems and other infrastructure, using tools and technologies such as Hadoop, Spark, and Kafka (see the sketch after this list).
• Assemble large, complex data sets that meet functional/non-functional business requirements, utilizing SQL, NoSQL, and cloud-based data warehousing solutions such as Amazon Redshift or Google BigQuery.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data
delivery, re-designing infrastructure for greater scalability, etc., using Python and PySpark.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety
of data sources using SQL and cloud-based ‘big data’ technologies like Apache Flink.
• Design and create a data lake with AWS services such as S3, Lambda, Step Functions, CloudFormation, Kinesis, and Firehose.
• Apply working knowledge of Apache Superset and Airbyte.
• Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data engineering needs using data visualization tools like Tableau, Superset, and Indicative.
• Create data tools for analytics and data engineering team members that assist them in building and optimizing
our product into an innovative industry leader, using machine learning frameworks like TensorFlow, PyTorch,
or Scikit-Learn.
• Create segmentations using graph databases, with knowledge of tools like Neo4j, Amazon Neptune, or GraphQL.
• Document and communicate data engineering solutions and methodologies using tools like Confluence and Jira.
• Assess and implement new data engineering tools and technologies to stay ahead in the industry.
• Troubleshoot and solve issues in the data engineering infrastructure using monitoring and debugging tools like the ELK Stack.
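To give a flavor of the pipeline work named above, here is a minimal, illustrative PySpark Structured Streaming sketch that reads events from Kafka and lands them in an S3 data lake as partitioned Parquet. The broker address, topic, event schema, and S3 paths are hypothetical placeholders, and the sketch assumes the spark-sql-kafka connector is available on the Spark classpath.

    # Hedged sketch: Kafka -> PySpark Structured Streaming -> S3 (Parquet).
    # All names below (broker, topic, schema, bucket) are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json, to_date
    from pyspark.sql.types import LongType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("events-to-datalake").getOrCreate()

    # Hypothetical event schema for illustration only.
    event_schema = StructType([
        StructField("player_id", StringType()),
        StructField("event_type", StringType()),
        StructField("ts", LongType()),  # epoch milliseconds
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "game-events")                # placeholder topic
        .load()
        # Kafka values arrive as bytes; decode and parse the JSON payload.
        .select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
        # Derive a date column so the lake is partitioned by day.
        .withColumn("dt", to_date((col("ts") / 1000).cast("timestamp")))
    )

    query = (
        events.writeStream.format("parquet")
        .option("path", "s3a://example-datalake/events/")  # placeholder bucket
        .option("checkpointLocation", "s3a://example-datalake/checkpoints/events/")
        .partitionBy("dt")
        .start()
    )
    query.awaitTermination()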
Key Requirements:
• The candidate should have at least 3-5 years of experience in building AWS data platform solutions, or equivalent BI experience.
• Experience in building and optimizing data pipelines using AWS and dbt ETL tools.
• Experience with Superset, Airbyte, AWS S3, Lambda, Step Functions, and Amazon Redshift.
• Experience in designing and deploying data platforms that support various data use cases, including batch analysis and real-time analytics.
• Experience using Python, PySpark, Kinesis, Firehose, and Kafka for real-time streaming analytical solutions (a minimal serverless sketch follows this list).
• Experience with data governance frameworks.
• Experience with MySQL and Python.
• Experience in query optimization.
• Good understanding of data warehousing concepts.
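In the same spirit, below is a minimal, illustrative AWS Lambda handler for the Kinesis-to-S3 leg of such a streaming stack. The bucket name, key prefix, and the assumption that each record carries a JSON payload are all hypothetical.

    # Hedged sketch: Kinesis -> Lambda -> S3 as newline-delimited JSON.
    # Bucket, key prefix, and payload shape are placeholders.
    import base64
    import json

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Kinesis delivers record data base64-encoded inside the event.
        records = [
            json.loads(base64.b64decode(r["kinesis"]["data"]))
            for r in event["Records"]
        ]
        # Land the decoded batch in S3 as newline-delimited JSON.
        body = "\n".join(json.dumps(r) for r in records)
        s3.put_object(
            Bucket="example-datalake",                 # placeholder bucket
            Key=f"raw/{context.aws_request_id}.json",  # unique per invocation
            Body=body.encode("utf-8"),
        )
        return {"processed": len(records)}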
Good to Have:
• Experience with Tableau, Hadoop, and AI/ML data modelling.
• Experience with Hive, HBase, and EMR clusters.
Location: Hyderabad